Launcher (#48)

* launcher in linux

* silence stdout and stderr linux

* arg parser and other changes

* retry instead of sleep

* no_std fixes

* reordered includes

* launcher for windows and kill clients when broker returns

* cargo fmt

* started launcher api cleanup

* use closures instead of functions

* small change

* reordered launcher params

* fixed clippy warnings

* fixed no_std

* moved launcher example to own folder

* docu

* cleanup launcher

* more docs

* Fix merge issues

* Rework the launcher code to provide a cleaner API

* Open file before spawning clients

* launcher: fix merge issue, sleep for a different amount for each core

* fixed no_std

* Tcp Broker to Broker Communication (#66)

* initial b2b implementation

* no_std and clippy fixes

* b2b testcase added

* more correct testcases

* fixed b2b

* typo

* fixed unused warning

* some clippy warning ignored

* using clippy.sh

* Update README.md

* fixed clippy run in workflow

* fixing clippy::match-same-arms

* make clippy less pedantic

* fixed some minor typos in the book

* launcher: use s1341's fork of core_affinity

* Build warning fix proposal, mostly about reference to packed fields. (#79)

* Observers refactor (#84)

* new observer structure with HasExecHooks

* adapt libafl_frida to new observers

* docstrings

* Composing feedback (#85)

* composing feedbacks as logic operations and bump to 0.2

* adapt fuzzers and libafl_frida

* fix windows build

* fixed clippy warnings

* Frida suppress instrumentation locations option (#87)

* Implement frida option

* Format

* add append/discard_metadata for and/or/not feedback (#86)

* add append/discard_metadata for and/or/not feedback

* fix

* Call append_metadata on crash (#88)

* Call append_metadata on crash

* Formatting

* Reachability example (#65)

* add reachability observer/feedback

* add fuzzer example

* fmt

* remove reachabilityobserver, use stdmapobserver instead

* update diff.patch

* update README

* fix the clippy warning

* Squashed commit of the following:

commit f20524ebd77011481e86b420c925e8504bd11308
Author: Andrea Fioraldi <andreafioraldi@gmail.com>
Date:   Tue May 4 16:00:39 2021 +0200

    Composing feedback (#85)

    * composing feedbacks as logic operations and bump to 0.2

    * adapt fuzzers and libafl_frida

    * fix windows build

commit e06efaa03bc96ef71740d7376c7381572bf11c6c
Author: Andrea Fioraldi <andreafioraldi@gmail.com>
Date:   Tue May 4 13:54:46 2021 +0200

    Observers refactor (#84)

    * new observer structure with HasExecHooks

    * adapt libafl_frida to new observers

    * docstrings

commit 17c6fcd31cb746c099654be2b7a168bd04d46381
Merge: 08a2d43 a78a4b7
Author: Andrea Fioraldi <andreafioraldi@gmail.com>
Date:   Mon May 3 11:16:49 2021 +0200

    Merge branch 'main' into dev

commit 08a2d43790797d8864565fec99e7043289a46283
Author: David CARLIER <devnexen@gmail.com>
Date:   Mon May 3 10:15:28 2021 +0100

    Build warning fix proposal, mostly about reference to packed fields. (#79)

commit 88fe8fa532ac34cbc10782f5f71264f620385dda
Merge: d5d46ad d2e7719
Author: Andrea Fioraldi <andreafioraldi@gmail.com>
Date:   Mon May 3 11:05:42 2021 +0200

    Merge pull request #80 from marcograss/book-typos

    fixed some minor typos in the book

commit a78a4b73fa798c1ed7a3d053369cca435e57aa07
Author: s1341 <s1341@users.noreply.github.com>
Date:   Mon May 3 10:34:15 2021 +0300

    frida-asan: Un-inline report funclet to reduce code bloat (#81)

    * frida-asan: Outline report funclet to reduce code bloat

    * fmt

commit d2e7719a8bea3a993394c187e2183d3e91f02c75
Author: Marco Grassi <marco.gra@gmail.com>
Date:   Sun May 2 21:58:33 2021 +0800

    fixed some minor typos in the book

commit d5d46ad7e440fd4a2925352ed1ccb9ced5d9463d
Author: Dominik Maier <domenukk@gmail.com>
Date:   Sat May 1 23:09:10 2021 +0200

    make clippy less pedantic

commit 52d25e979e23589587c885803641058dc36aa998
Author: Dominik Maier <domenukk@gmail.com>
Date:   Sat May 1 22:23:59 2021 +0200

    fixing clippy::match-same-arms

commit cd66f880dea830d1e38e89fd1bf3c20fd89c9d70
Author: Dominik Maier <domenukk@gmail.com>
Date:   Sat May 1 14:02:07 2021 +0200

    fixed clippy run in workflow

commit ddcf086acde2b703c36e4ec3976588313fc3d591
Author: Dominik Maier <domenukk@gmail.com>
Date:   Sat May 1 13:53:29 2021 +0200

    Update README.md

commit c715f1fe6e42942e53bd13ea6a23214620f6c829
Author: Dominik Maier <domenukk@gmail.com>
Date:   Sat May 1 13:48:38 2021 +0200

    using clippy.sh

commit 9374b26b1d2d44c6042fdd653a8d960ce698592c
Author: Dominik Maier <domenukk@gmail.com>
Date:   Sat May 1 13:47:44 2021 +0200

    some clippy warning ignored

commit b9e75c0c98fdfb1e70778e6f3612a94b71dcd21a
Author: Dominik Maier <domenukk@gmail.com>
Date:   Sat May 1 13:24:02 2021 +0200

    Tcp Broker to Broker Communication (#66)

    * initial b2b implementation

    * no_std and clippy fixes

    * b2b testcase added

    * more correct testcases

    * fixed b2b

    * typo

    * fixed unused warning

* feedbacks now return a boolean value

* use feedback_or, and modify Cargo.toml

* fix diff between dev and this branch

* fmt

Co-authored-by: Dominik Maier <domenukk@gmail.com>

* clippy fixes

* clippy fixes

* clippy fixes, x86_64 warnings

* more docs

* Observers lifetime (#89)

* introduce MatchName and allow lifetimes in observers

* adapt fuzzers to observers with lifetime

* introduce type_eq when on nightly

* fix no_std

* fmt

* Better docu (#90)

* more docs

* more docs:

* more docu

* more docu

* finished docs

* cleaned up markup

* must_use tags added

* more docs

* more docu, less clippy

* more fixes

* Clippy fixes (#92)

* more docs

* more docs:

* more docu

* more docu

* finished docs

* cleaned up markup

* must_use tags added

* more docs

* swapped if/else, as per clippy

* more docu, less clippy

* more fixes

* Fix merge issues

* Get rid of unneeded prints

* Fix merge errors

* added b2b to restarting interface

* Setting SO_REUSEPORT

* added b2b to launcher api

* more windows launcher

* Fix merge errors

* Add b2b support to frida_libpng

* make frida_libpng bind to a public address

* Convert launcher into a builder LauncherBuilder

* formatting

* Convert setup_restarting_mgr to a builder RestartingMgrBuilder; leave setup_restarting_mgr_std as is, so that fuzzers work

* RcShmem should be locked via a mutex

* Wait at least 1 second between broker and first client, to avoid race

* update frida_libpng README for cross-compiling to android (#100)

Co-authored-by: Ariel Zentner <ArielZ@nsogroup.com>

* Fixed build for Windows

* no_std fixes

* reverted aa6773dcade93b3a66ce86e6b2cc75f55ce194e7 & windows fixes

* added pipes, moving to remove race conditions for rc shmem

* fix unix build

* fixed clippy:

* fixed no_std once more

* renamed b2b to remote_broker_addr

* you get a pre_fork, and you get a post_fork, forks for everyone

* switched to typed_builder

* Fix merge issue

* Fix frida fuzzer with new Launcher builder

* Introspection (#97)

* Rework to put `ClientPerfStats` in `State` and pass that along. Still need to work on getting granular information from `Feedback` and `Observer`

* Add perf_stats feature to libafl/Cargo.toml

* Update feedbacks to have with_perf

* Remove unneeded print statement

* cargo fmt all the things

* use local llvmint vs cpu specific asm for reading cycle counter

* Remove debug testing code

* Stats timeout to 3 seconds

* Inline smallish functions for ClientPerfStats

* Remove .libs/llvmint and have the correct conditional compilation of link_llvm_intrinsics on the perf_stats feature

* pub(crate) the NUM_FEEDBACK and NUM_STAGES consts

* Tcp Broker to Broker Communication (#66)

* initial b2b implementation

* no_std and clippy fixes

* b2b testcase added

* more correct testcases

* fixed b2b

* typo

* fixed unused warning

* clippy fixes

* fallback to systemtime on non-x86

* make clippy more strict

* small fixes

* bump 0.2.1

* readme

Co-authored-by: ctfhacker <cld251@gmail.com>
Co-authored-by: Dominik Maier <domenukk@gmail.com>

* typos (please review)

* merged clippy.sh

* utils

* Add asan cores option (#102)

* added asan-cores option for frida fuzzer

When ASan is enabled (via LIBAFL_FRIDA_OPTIONS enable-asan), you can
select exactly which cores ASan should run on with the
asan-cores option.

* add is_some check instead of !None

Co-authored-by: Ariel Zentner <ArielZ@nsogroup.com>

* moved utils to bolts

* fixed typo

* no_std fixes

* unix fixes

* fixed unix no_std build

* fix llmp.rs

* adapt libfuzzer_libpng_launcher

* added all fuzzers to ci

* fmt, improved ci

* tests crate not ready for prime time

* clippy fixes

* make ci script executable

* trying to fix example fuzzers

* working libfuzzer_libpng_launcher

* frida_libpng builds

* clippy

* bump version

* fix no_std

* fix dep version

* clippy fixes

* more fixes

* clippy++

* warn again

* clearer readme

Co-authored-by: Vimal Joseph <vimaljoseph027@gmail.com>
Co-authored-by: Dominik Maier <domenukk@gmail.com>
Co-authored-by: s1341 <github@shmarya.net>
Co-authored-by: Marco Grassi <marco.gra@gmail.com>
Co-authored-by: s1341 <s1341@users.noreply.github.com>
Co-authored-by: Andrea Fioraldi <andreafioraldi@gmail.com>
Co-authored-by: David CARLIER <devnexen@gmail.com>
Co-authored-by: Toka <tokazerkje@outlook.com>
Co-authored-by: r-e-l-z <azentner@gmail.com>
Co-authored-by: Ariel Zentner <ArielZ@nsogroup.com>
Co-authored-by: ctfhacker <cld251@gmail.com>
Co-authored-by: hexcoder <hexcoder-@users.noreply.github.com>
Commit d991395c81 (parent b51936397b) by Vimal Joseph, committed via GitHub on 2021-05-19 16:38:24 +05:30.
73 changed files with 2066 additions and 692 deletions


@ -60,6 +60,8 @@ jobs:
run: cargo test --all-features --doc
- name: Run clippy
run: ./clippy.sh
- name: Build fuzzers
run: ./build_all_fuzzers.sh
windows:
runs-on: windows-latest
steps:

build_all_fuzzers.sh (new executable file, 16 lines)

@ -0,0 +1,16 @@
#!/bin/sh
# TODO: This should be rewritten in rust, a Makefile, or some platform-independent language
cd fuzzers
for fuzzer in *;
do
echo "[+] Checking fmt, clippy, and building $fuzzer"
cd $fuzzer \
&& cargo fmt --all -- --check \
&& ../../clippy.sh --no-clean \
&& cargo build \
&& cd .. \
|| exit 1
done
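
As a usage sketch (run from the repository root; assumes a POSIX shell), the new script and the reworked `clippy.sh` below can be invoked like this:

```bash
# Build and lint every example fuzzer (this is what the new CI step runs):
./build_all_fuzzers.sh

# Lint the core crates directly; --no-clean skips the initial `cargo clean -p libafl`:
./clippy.sh --no-clean
```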


@ -1,14 +1,18 @@
#!/bin/sh
# Clippy checks
cargo clean -p libafl
if [ "$1" != "--no-clean" ]; then
# Usually, we want to clean, since clippy won't work otherwise.
echo "[+] Cleaning up previous builds..."
cargo clean -p libafl
fi
RUST_BACKTRACE=full cargo clippy --all --all-features --tests -- \
-D clippy::pedantic \
-W clippy::similar-names \
-W clippy::unused_self \
-W clippy::too_many_lines \
-W clippy::option_if_let_else \
-W clippy::must-use-candidate \
-W clippy::if-not-else \
-W clippy::similar-names \
-A clippy::type_repetition_in_bounds \
-A clippy::missing-errors-doc \
-A clippy::cast-possible-truncation \


@ -156,13 +156,13 @@ Now you can prepend the following `use` directives to your main.rs and compile i
```rust
use std::path::PathBuf;
use libafl::{
bolts::{current_nanos, rands::StdRand},
corpus::{InMemoryCorpus, OnDiskCorpus, QueueCorpusScheduler},
events::SimpleEventManager,
executors::{inprocess::InProcessExecutor, ExitKind},
generators::RandPrintablesGenerator,
state::State,
stats::SimpleStats,
utils::{current_nanos, StdRand},
};
```


@ -1,6 +1,6 @@
[package]
name = "baby_fuzzer"
version = "0.2.0"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>"]
edition = "2018"


@ -1,7 +1,7 @@
use std::path::PathBuf;
use libafl::{
bolts::tuples::tuple_list,
bolts::{current_nanos, rands::StdRand, tuples::tuple_list},
corpus::{InMemoryCorpus, OnDiskCorpus, QueueCorpusScheduler},
events::SimpleEventManager,
executors::{inprocess::InProcessExecutor, ExitKind},
@ -13,25 +13,26 @@ use libafl::{
stages::mutational::StdMutationalStage,
state::StdState,
stats::SimpleStats,
utils::{current_nanos, StdRand},
};
// Coverage map with explicit assignments due to the lack of instrumentation
/// Coverage map with explicit assignments due to the lack of instrumentation
static mut SIGNALS: [u8; 16] = [0; 16];
/// Assign a signal to the signals map
fn signals_set(idx: usize) {
unsafe { SIGNALS[idx] = 1 };
}
#[allow(clippy::similar_names)]
pub fn main() {
// The closure that we want to fuzz
let mut harness = |buf: &[u8]| {
signals_set(0);
if buf.len() > 0 && buf[0] == 'a' as u8 {
if !buf.is_empty() && buf[0] == b'a' {
signals_set(1);
if buf.len() > 1 && buf[1] == 'b' as u8 {
if buf.len() > 1 && buf[1] == b'b' {
signals_set(2);
if buf.len() > 2 && buf[2] == 'c' as u8 {
if buf.len() > 2 && buf[2] == b'c' {
panic!("=)");
}
}
@ -86,7 +87,7 @@ pub fn main() {
&mut state,
&mut mgr,
)
.expect("Failed to create the Executor".into());
.expect("Failed to create the Executor");
// Generator of printable bytearrays of max size 32
let mut generator = RandPrintablesGenerator::new(32);
@ -94,7 +95,7 @@ pub fn main() {
// Generate 8 initial inputs
state
.generate_initial_inputs(&mut fuzzer, &mut executor, &mut generator, &mut mgr, 8)
.expect("Failed to generate the initial corpus".into());
.expect("Failed to generate the initial corpus");
// Setup a mutational stage with a basic bytes mutator
let mutator = StdScheduledMutator::new(havoc_mutations());
@ -102,5 +103,5 @@ pub fn main() {
fuzzer
.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)
.expect("Error in the fuzzing loop".into());
.expect("Error in the fuzzing loop");
}


@ -1,6 +1,6 @@
[package]
name = "frida_libpng"
version = "0.2.0"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>"]
edition = "2018"
build = "build.rs"
@ -21,17 +21,18 @@ num_cpus = "1.0"
which = "4.1"
[target.'cfg(unix)'.dependencies]
libafl = { path = "../../libafl/", features = [ "std", "llmp_compression" ] } #, "llmp_small_maps", "llmp_debug"]}
libafl = { path = "../../libafl/", features = [ "std", "llmp_compression", "llmp_bind_public" ] } #, "llmp_small_maps", "llmp_debug"]}
libafl_frida = { path = "../../libafl_frida" }
capstone = "0.8.0"
frida-gum = { version = "0.4", git = "https://github.com/s1341/frida-rust", features = [ "auto-download", "event-sink", "invocation-listener"] }
frida-gum = { version = "0.4.1", git = "https://github.com/frida/frida-rust", features = [ "auto-download", "event-sink", "invocation-listener"] }
#frida-gum = { version = "0.4", path = "../../../frida-rust/frida-gum", features = [ "auto-download", "event-sink", "invocation-listener"] }
libafl_frida = { path = "../../libafl_frida", version = "0.2.0" }
lazy_static = "1.4.0"
libc = "0.2"
libloading = "0.7.0"
num-traits = "0.2.14"
rangemap = "0.1.10"
seahash = "4.1.0"
clap = "2.33"
serde = "1.0"
backtrace = "0.3"


@ -11,6 +11,15 @@ This will call (the build.rs)[./build.rs], which in turn downloads a libpng arch
Then, it will link (the fuzzer)[./src/fuzzer.rs] against (the C++ harness)[./harness.cc] and the instrumented `libpng`.
Afterwards, the fuzzer will be ready to run, from `../../target/examples/libfuzzer_libpng`.
### Build For Android
When building for android using a cross-compiler, make sure you have a _standalone toolchain_, and then add the following:
1. In the ~/.cargo/config file add a target with the correct cross-compiler toolchain name (in this case aarch64-linux-android, but names may vary)
`[target.aarch64-linux-android]`
`linker="aarch64-linux-android-clang"`
2. add path to installed toolchain to PATH env variable.
3. define CLANG_PATH and add target to the build command line:
`CLANG_PATH=<path to installed toolchain>/bin/aarch64-linux-android-clang cargo -v build --release --target=aarch64-linux-android`
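Put together, the steps above could look like the following sketch, where `$NDK_TOOLCHAIN` is a placeholder for your standalone toolchain directory:
```bash
# Placeholder path to the standalone toolchain; adjust for your setup.
export NDK_TOOLCHAIN=/opt/android-toolchain-aarch64
export PATH="$NDK_TOOLCHAIN/bin:$PATH"
CLANG_PATH="$NDK_TOOLCHAIN/bin/aarch64-linux-android-clang" \
  cargo -v build --release --target=aarch64-linux-android
```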
## Run
The first time you run the binary, the broker will open a tcp port (currently on port `1337`), waiting for fuzzer clients to connect. This port is local and only used for the initial handshake. All further communication happens via shared map, to be independent of the kernel.
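With the clap-based argument parser added in this PR (positional harness, symbol, and modules to instrument, plus `-c`/`--cores`, `-o`/`--output` and `-B`/`--b2baddr`), an invocation might look like the following sketch; the binary name, library path, and core list are placeholders:
```bash
# Fuzz LLVMFuzzerTestOneInput in a harness shared object on core 0,
# silencing client output and (optionally) connecting to a remote broker.
./frida_fuzzer ./libpng-harness.so LLVMFuzzerTestOneInput ./libpng-harness.so \
  --cores 0 \
  --output /dev/null \
  --b2baddr 192.168.1.2:1337
```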


@ -12,15 +12,14 @@ const LIBPNG_URL: &str =
"https://deac-fra.dl.sourceforge.net/project/libpng/libpng16/1.6.37/libpng-1.6.37.tar.xz";
fn build_dep_check(tools: &[&str]) {
for tool in tools.into_iter() {
for tool in tools {
println!("Checking for build tool {}...", tool);
match which(tool) {
Ok(path) => println!("Found build tool {}", path.to_str().unwrap()),
Err(_) => {
if let Ok(path) = which(tool) {
println!("Found build tool {}", path.to_str().unwrap())
} else {
println!("ERROR: missing build tool {}", tool);
exit(1);
}
};
}
}
@ -35,7 +34,7 @@ fn main() {
let cwd = env::current_dir().unwrap().to_string_lossy().to_string();
let out_dir = out_dir.to_string_lossy().to_string();
let out_dir_path = Path::new(&out_dir);
std::fs::create_dir_all(&out_dir).expect(&format!("Failed to create {}", &out_dir));
std::fs::create_dir_all(&out_dir).unwrap_or_else(|_| panic!("Failed to create {}", &out_dir));
println!("cargo:rerun-if-changed=build.rs");
println!("cargo:rerun-if-changed=../libfuzzer_runtime/rt.c",);


@ -1,13 +1,24 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for libpng.
use clap::{App, Arg};
#[cfg(target_os = "android")]
use libafl::bolts::os::ashmem_server::AshmemService;
use libafl::{
bolts::tuples::tuple_list,
bolts::{
current_nanos,
launcher::Launcher,
os::parse_core_bind_arg,
rands::StdRand,
shmem::{ShMemProvider, StdShMemProvider},
tuples::tuple_list,
},
corpus::{
ondisk::OnDiskMetadataFormat, Corpus, InMemoryCorpus,
IndexesLenTimeMinimizerCorpusScheduler, OnDiskCorpus, QueueCorpusScheduler,
},
events::setup_restarting_mgr_std,
executors::{
inprocess::InProcessExecutor, timeout::TimeoutExecutor, Executor, ExitKind, HasExecHooks,
HasExecHooksTuple, HasObservers, HasObserversHooks,
@ -16,13 +27,14 @@ use libafl::{
feedbacks::{CrashFeedback, MapFeedbackState, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
fuzzer::{Fuzzer, StdFuzzer},
inputs::{HasTargetBytes, Input},
mutators::scheduled::{havoc_mutations, StdScheduledMutator},
mutators::token_mutations::Tokens,
mutators::{
scheduled::{havoc_mutations, StdScheduledMutator},
token_mutations::Tokens,
},
observers::{HitcountsMapObserver, ObserversTuple, StdMapObserver, TimeObserver},
stages::mutational::StdMutationalStage,
state::{HasCorpus, HasMetadata, StdState},
stats::SimpleStats,
utils::{current_nanos, StdRand},
Error,
};
@ -31,7 +43,14 @@ use frida_gum::{
Gum, NativePointer,
};
use std::{env, ffi::c_void, marker::PhantomData, path::PathBuf, time::Duration};
use std::{
env,
ffi::c_void,
marker::PhantomData,
net::SocketAddr,
path::{Path, PathBuf},
time::Duration,
};
use libafl_frida::{
asan_rt::{AsanErrorsFeedback, AsanErrorsObserver, ASAN_ERRORS},
@ -67,17 +86,20 @@ where
#[inline]
fn run_target(&mut self, input: &I) -> Result<ExitKind, Error> {
if self.helper.stalker_enabled() {
if !self.followed {
self.followed = true;
self.stalker
.follow_me::<NoneEventSink>(self.helper.transformer(), None);
} else {
if self.followed {
self.stalker.activate(NativePointer(
self.base.inner().harness_mut() as *mut _ as *mut c_void
))
} else {
self.followed = true;
self.stalker
.follow_me::<NoneEventSink>(self.helper.transformer(), None);
}
}
let res = self.base.run_target(input);
if self.helper.stalker_enabled() {
self.stalker.deactivate();
}
if unsafe { ASAN_ERRORS.is_some() && !ASAN_ERRORS.as_ref().unwrap().is_empty() } {
println!("Crashing target as it had ASAN errors");
unsafe {
@ -196,25 +218,71 @@ pub fn main() {
// Needed only on no_std
//RegistryBuilder::register::<Tokens>();
let matches = App::new("libafl_frida")
.version("0.1.0")
.arg(
Arg::with_name("cores")
.short("c")
.long("cores")
.value_name("CORES")
.required(true)
.takes_value(true),
)
.arg(Arg::with_name("harness").required(true).index(1))
.arg(Arg::with_name("symbol").required(true).index(2))
.arg(
Arg::with_name("modules_to_instrument")
.required(true)
.index(3),
)
.arg(
Arg::with_name("output")
.short("o")
.long("output")
.value_name("OUTPUT")
.required(false)
.takes_value(true),
)
.arg(
Arg::with_name("b2baddr")
.short("B")
.long("b2baddr")
.value_name("B2BADDR")
.required(false)
.takes_value(true),
)
.get_matches();
let cores = parse_core_bind_arg(&matches.value_of("cores").unwrap().to_string()).unwrap();
color_backtrace::install();
println!(
"Workdir: {:?}",
env::current_dir().unwrap().to_string_lossy().to_string()
);
let broker_addr = matches
.value_of("b2baddr")
.map(|addrstr| addrstr.parse().unwrap());
unsafe {
fuzz(
&env::args().nth(1).expect("no module specified"),
&env::args().nth(2).expect("no symbol specified"),
env::args()
.nth(3)
.expect("no modules to instrument specified")
matches.value_of("harness").unwrap(),
matches.value_of("symbol").unwrap(),
&matches
.value_of("modules_to_instrument")
.unwrap()
.split(':')
.map(|module_name| std::fs::canonicalize(module_name).unwrap())
.collect(),
&vec![PathBuf::from("./corpus")],
PathBuf::from("./crashes"),
.collect::<Vec<_>>(),
//modules_to_instrument,
&[PathBuf::from("./corpus")],
&PathBuf::from("./crashes"),
1337,
&cores,
matches.value_of("output"),
broker_addr,
)
.expect("An error occurred while fuzzing");
}
@ -222,56 +290,65 @@ pub fn main() {
/// Not supported on windows right now
#[cfg(windows)]
#[allow(clippy::too_many_arguments)]
fn fuzz(
_module_name: &str,
_symbol_name: &str,
_corpus_dirs: Vec<PathBuf>,
_objective_dir: PathBuf,
_corpus_dirs: &[PathBuf],
_objective_dir: &Path,
_broker_port: u16,
_cores: &[usize],
_stdout_file: Option<&str>,
_broker_addr: Option<SocketAddr>,
) -> Result<(), ()> {
todo!("Example not supported on Windows");
}
/// The actual fuzzer
#[cfg(unix)]
#[allow(clippy::too_many_lines, clippy::clippy::too_many_arguments)]
unsafe fn fuzz(
module_name: &str,
symbol_name: &str,
modules_to_instrument: Vec<PathBuf>,
corpus_dirs: &Vec<PathBuf>,
objective_dir: PathBuf,
modules_to_instrument: &[PathBuf],
corpus_dirs: &[PathBuf],
objective_dir: &Path,
broker_port: u16,
cores: &[usize],
stdout_file: Option<&str>,
broker_addr: Option<SocketAddr>,
) -> Result<(), Error> {
let stats_closure = |s| println!("{}", s);
// 'While the stats are state, they are usually used in the broker - which is likely never restarted
let stats = SimpleStats::new(|s| println!("{}", s));
let stats = SimpleStats::new(stats_closure);
#[cfg(target_os = "android")]
AshmemService::start().expect("Failed to start Ashmem service");
let shmem_provider = StdShMemProvider::new()?;
let mut client_init_stats = || Ok(SimpleStats::new(stats_closure));
let mut run_client = |state: Option<StdState<_, _, _, _, _>>, mut mgr| {
// The restarting state will spawn the same process again as child, then restarted it each time it crashes.
let (state, mut restarting_mgr) = match setup_restarting_mgr_std(stats, broker_port) {
Ok(res) => res,
Err(err) => match err {
Error::ShuttingDown => {
return Ok(());
}
_ => {
panic!("Failed to setup the restarter: {}", err);
}
},
};
let gum = Gum::obtain();
let lib = libloading::Library::new(module_name).unwrap();
let target_func: libloading::Symbol<unsafe extern "C" fn(data: *const u8, size: usize) -> i32> =
lib.get(symbol_name.as_bytes()).unwrap();
let target_func: libloading::Symbol<
unsafe extern "C" fn(data: *const u8, size: usize) -> i32,
> = lib.get(symbol_name.as_bytes()).unwrap();
let mut frida_harness = move |buf: &[u8]| {
(target_func)(buf.as_ptr(), buf.len());
ExitKind::Ok
};
let gum = Gum::obtain();
let frida_options = FridaOptions::parse_env_options();
let mut frida_helper =
FridaInstrumentationHelper::new(&gum, &frida_options, module_name, &modules_to_instrument);
let mut frida_helper = FridaInstrumentationHelper::new(
&gum,
&frida_options,
module_name,
&modules_to_instrument,
);
// Create an observation channel using the coverage map
let edges_observer = HitcountsMapObserver::new(StdMapObserver::new_from_ptr(
@ -283,12 +360,7 @@ unsafe fn fuzz(
// Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time");
// Create an observation channel for ASan violations
let asan_observer = AsanErrorsObserver::new(&ASAN_ERRORS);
// The state of the edges feedback.
let feedback_state = MapFeedbackState::with_observer(&edges_observer);
// Feedback to rate the interestingness of an input
// This one is composed by two Feedbacks in OR
let feedback = feedback_or!(
@ -314,7 +386,10 @@ unsafe fn fuzz(
InMemoryCorpus::new(),
// Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new_save_meta(objective_dir, Some(OnDiskMetadataFormat::JsonPretty))
OnDiskCorpus::new_save_meta(
objective_dir.to_path_buf(),
Some(OnDiskMetadataFormat::JsonPretty),
)
.unwrap(),
// States of the feedbacks.
// They are the data related to the feedbacks that you want to persist in the State.
@ -346,16 +421,19 @@ unsafe fn fuzz(
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
frida_helper.register_thread();
// Create the executor for an in-process function with just one observer for edge coverage
let mut executor = FridaInProcessExecutor::new(
&gum,
InProcessExecutor::new(
&mut frida_harness,
tuple_list!(edges_observer, time_observer, asan_observer),
tuple_list!(
edges_observer,
time_observer,
AsanErrorsObserver::new(&ASAN_ERRORS)
),
&mut fuzzer,
&mut state,
&mut restarting_mgr,
&mut mgr,
)?,
&mut frida_helper,
Duration::new(10, 0),
@ -373,21 +451,24 @@ unsafe fn fuzz(
// In case the corpus is empty (on first run), reset
if state.corpus().count() < 1 {
state
.load_initial_inputs(
&mut fuzzer,
&mut executor,
&mut restarting_mgr,
&corpus_dirs,
)
.expect(&format!(
"Failed to load initial corpus at {:?}",
&corpus_dirs
));
.load_initial_inputs(&mut fuzzer, &mut executor, &mut mgr, &corpus_dirs)
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &corpus_dirs));
println!("We imported {} inputs from disk.", state.corpus().count());
}
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut restarting_mgr)?;
// Never reached
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
};
Launcher::builder()
.shmem_provider(shmem_provider)
.stats(stats)
.client_init_stats(&mut client_init_stats)
.run_client(&mut run_client)
.cores(cores)
.broker_port(broker_port)
.stdout_file(stdout_file)
.remote_broker_addr(broker_addr)
.build()
.launch()
}


@ -1,6 +1,6 @@
[package]
name = "libfuzzer_libmozjpeg"
version = "0.2.0"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>"]
edition = "2018"


@ -4,7 +4,7 @@
use std::{env, path::PathBuf};
use libafl::{
bolts::tuples::tuple_list,
bolts::{current_nanos, rands::StdRand, tuples::tuple_list},
corpus::{Corpus, InMemoryCorpus, OnDiskCorpus, RandCorpusScheduler},
events::setup_restarting_mgr_std,
executors::{inprocess::InProcessExecutor, ExitKind},
@ -17,7 +17,6 @@ use libafl::{
stages::mutational::StdMutationalStage,
state::{HasCorpus, HasMetadata, StdState},
stats::SimpleStats,
utils::{current_nanos, StdRand},
Error,
};
@ -42,7 +41,7 @@ pub fn main() {
env::current_dir().unwrap().to_string_lossy().to_string()
);
fuzz(
vec![PathBuf::from("./corpus")],
&[PathBuf::from("./corpus")],
PathBuf::from("./crashes"),
1337,
)
@ -50,13 +49,13 @@ pub fn main() {
}
/// The actual fuzzer
fn fuzz(corpus_dirs: Vec<PathBuf>, objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
fn fuzz(corpus_dirs: &[PathBuf], objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
// 'While the stats are state, they are usually used in the broker - which is likely never restarted
let stats = SimpleStats::new(|s| println!("{}", s));
// The restarting state will spawn the same process again as child, then restarted it each time it crashes.
let (state, mut restarting_mgr) =
setup_restarting_mgr_std(stats, broker_port).expect("Failed to setup the restarter".into());
setup_restarting_mgr_std(stats, broker_port).expect("Failed to setup the restarter");
// Create an observation channel using the coverage map
let edges = unsafe { &mut EDGES_MAP[0..MAX_EDGES_NUM] };
@ -155,10 +154,7 @@ fn fuzz(corpus_dirs: Vec<PathBuf>, objective_dir: PathBuf, broker_port: u16) ->
&mut restarting_mgr,
&corpus_dirs,
)
.expect(&format!(
"Failed to load initial corpus at {:?}",
&corpus_dirs
));
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &corpus_dirs));
println!("We imported {} inputs from disk.", state.corpus().count());
}


@ -1,6 +1,6 @@
[package]
name = "libfuzzer_libpng"
version = "0.2.0"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>"]
edition = "2018"


@ -6,6 +6,7 @@ use std::{env, path::PathBuf};
use libafl::{
bolts::tuples::tuple_list,
bolts::{current_nanos, rands::StdRand},
corpus::{
Corpus, InMemoryCorpus, IndexesLenTimeMinimizerCorpusScheduler, OnDiskCorpus,
QueueCorpusScheduler,
@ -21,13 +22,12 @@ use libafl::{
stages::mutational::StdMutationalStage,
state::{HasCorpus, HasMetadata, StdState},
stats::SimpleStats,
utils::{current_nanos, StdRand},
Error,
};
use libafl_targets::{libfuzzer_initialize, libfuzzer_test_one_input, EDGES_MAP, MAX_EDGES_NUM};
/// The main fn, no_mangle as it is a C main
/// The main fn, `no_mangle` as it is a C main
#[no_mangle]
pub fn main() {
// Registry the metadata types used in this fuzzer
@ -39,7 +39,7 @@ pub fn main() {
env::current_dir().unwrap().to_string_lossy().to_string()
);
fuzz(
vec![PathBuf::from("./corpus")],
&[PathBuf::from("./corpus")],
PathBuf::from("./crashes"),
1337,
)
@ -47,7 +47,7 @@ pub fn main() {
}
/// The actual fuzzer
fn fuzz(corpus_dirs: Vec<PathBuf>, objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
fn fuzz(corpus_dirs: &[PathBuf], objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
// 'While the stats are state, they are usually used in the broker - which is likely never restarted
let stats = SimpleStats::new(|s| println!("{}", s));
@ -160,10 +160,7 @@ fn fuzz(corpus_dirs: Vec<PathBuf>, objective_dir: PathBuf, broker_port: u16) ->
&mut restarting_mgr,
&corpus_dirs,
)
.expect(&format!(
"Failed to load initial corpus at {:?}",
&corpus_dirs
));
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &corpus_dirs));
println!("We imported {} inputs from disk.", state.corpus().count());
}


@ -0,0 +1 @@
libpng-*


@ -0,0 +1,31 @@
[package]
name = "libfuzzer_libpng_launcher"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>"]
edition = "2018"
[features]
default = ["std"]
std = []
#[profile.release]
#lto = true
#codegen-units = 1
#opt-level = 3
#debug = true
[build-dependencies]
cc = { version = "1.0", features = ["parallel"] }
which = { version = "4.0.2" }
num_cpus = "1.0"
[dependencies]
libafl = { path = "../../libafl/" }
libafl_targets = { path = "../../libafl_targets/", features = ["pcguard_hitcounts", "libfuzzer"] }
# TODO Include it only when building cc
libafl_cc = { path = "../../libafl_cc/" }
clap = { version = "3.0.0-beta.2", features = ["yaml"] }
[lib]
name = "libfuzzer_libpng"
crate-type = ["staticlib"]


@ -0,0 +1,47 @@
# Libfuzzer for libpng, with launcher
This folder contains an example fuzzer for libpng, using LLMP for fast multi-process fuzzing and crash detection.
To show off crash detection, we added a `ud2` instruction to the harness, edit harness.cc if you want a non-crashing example.
It has been tested on Linux.
In contrast to the normal libfuzzer libpng example, this uses the `launcher` feature, that automatically spawns `n` child processes, and binds them to a free core.
## Build
To build this example, run
```bash
cargo build --release
```
This will build the library with the fuzzer (src/lib.rs) with the libfuzzer compatibility layer and the SanitizerCoverage runtime functions for coverage feedback.
In addition, it will also build two C and C++ compiler wrappers (bin/libafl_c(libafl_c/xx).rs) that you must use to compile the target.
Then download libpng, and unpack the archive:
```bash
wget https://deac-fra.dl.sourceforge.net/project/libpng/libpng16/1.6.37/libpng-1.6.37.tar.xz
tar -xvf libpng-1.6.37.tar.xz
```
Now compile libpng, using the libafl_cc compiler wrapper:
```bash
cd libpng-1.6.37
./configure
make CC=../target/release/libafl_cc CXX=../target/release/libafl_cxx -j `nproc`
```
You can find the static lib at `libpng-1.6.37/.libs/libpng16.a`.
Now, we have to build the libfuzzer harness and link all together to create our fuzzer binary.
```
cd ..
./target/release/libafl_cxx ./harness.cc libpng-1.6.37/.libs/libpng16.a -I libpng-1.6.37/ -o fuzzer_libpng -lz -lm
```
Afterwards, the fuzzer will be ready to run.
## Run
Just run once, the launcher feature should do the rest.
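
For example (a sketch; the core list is a placeholder, using the `--cores` syntax from the clap config added in this PR):

```bash
# Spawns one fuzzing client on each listed core; the broker is started automatically.
./fuzzer_libpng --cores 0,1,2,3
```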

Four binary corpus seed files added (218 B, 376 B, 228 B, and 427 B); contents not shown.


@ -0,0 +1,197 @@
// libpng_read_fuzzer.cc
// Copyright 2017-2018 Glenn Randers-Pehrson
// Copyright 2015 The Chromium Authors. All rights reserved.
// Use of this source code is governed by a BSD-style license that may
// be found in the LICENSE file https://cs.chromium.org/chromium/src/LICENSE
// Last changed in libpng 1.6.35 [July 15, 2018]
// The modifications in 2017 by Glenn Randers-Pehrson include
// 1. addition of a PNG_CLEANUP macro,
// 2. setting the option to ignore ADLER32 checksums,
// 3. adding "#include <string.h>" which is needed on some platforms
// to provide memcpy().
// 4. adding read_end_info() and creating an end_info structure.
// 5. adding calls to png_set_*() transforms commonly used by browsers.
#include <stddef.h>
#include <stdint.h>
#include <string.h>
#include <vector>
#define PNG_INTERNAL
#include "png.h"
#define PNG_CLEANUP \
if(png_handler.png_ptr) \
{ \
if (png_handler.row_ptr) \
png_free(png_handler.png_ptr, png_handler.row_ptr); \
if (png_handler.end_info_ptr) \
png_destroy_read_struct(&png_handler.png_ptr, &png_handler.info_ptr,\
&png_handler.end_info_ptr); \
else if (png_handler.info_ptr) \
png_destroy_read_struct(&png_handler.png_ptr, &png_handler.info_ptr,\
nullptr); \
else \
png_destroy_read_struct(&png_handler.png_ptr, nullptr, nullptr); \
png_handler.png_ptr = nullptr; \
png_handler.row_ptr = nullptr; \
png_handler.info_ptr = nullptr; \
png_handler.end_info_ptr = nullptr; \
}
struct BufState {
const uint8_t* data;
size_t bytes_left;
};
struct PngObjectHandler {
png_infop info_ptr = nullptr;
png_structp png_ptr = nullptr;
png_infop end_info_ptr = nullptr;
png_voidp row_ptr = nullptr;
BufState* buf_state = nullptr;
~PngObjectHandler() {
if (row_ptr)
png_free(png_ptr, row_ptr);
if (end_info_ptr)
png_destroy_read_struct(&png_ptr, &info_ptr, &end_info_ptr);
else if (info_ptr)
png_destroy_read_struct(&png_ptr, &info_ptr, nullptr);
else
png_destroy_read_struct(&png_ptr, nullptr, nullptr);
delete buf_state;
}
};
void user_read_data(png_structp png_ptr, png_bytep data, size_t length) {
BufState* buf_state = static_cast<BufState*>(png_get_io_ptr(png_ptr));
if (length > buf_state->bytes_left) {
png_error(png_ptr, "read error");
}
memcpy(data, buf_state->data, length);
buf_state->bytes_left -= length;
buf_state->data += length;
}
static const int kPngHeaderSize = 8;
// Entry point for LibFuzzer.
// Roughly follows the libpng book example:
// http://www.libpng.org/pub/png/book/chapter13.html
extern "C" int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size) {
if (size < kPngHeaderSize) {
return 0;
}
std::vector<unsigned char> v(data, data + size);
if (png_sig_cmp(v.data(), 0, kPngHeaderSize)) {
// not a PNG.
return 0;
}
PngObjectHandler png_handler;
png_handler.png_ptr = nullptr;
png_handler.row_ptr = nullptr;
png_handler.info_ptr = nullptr;
png_handler.end_info_ptr = nullptr;
png_handler.png_ptr = png_create_read_struct
(PNG_LIBPNG_VER_STRING, nullptr, nullptr, nullptr);
if (!png_handler.png_ptr) {
return 0;
}
png_handler.info_ptr = png_create_info_struct(png_handler.png_ptr);
if (!png_handler.info_ptr) {
PNG_CLEANUP
return 0;
}
png_handler.end_info_ptr = png_create_info_struct(png_handler.png_ptr);
if (!png_handler.end_info_ptr) {
PNG_CLEANUP
return 0;
}
png_set_crc_action(png_handler.png_ptr, PNG_CRC_QUIET_USE, PNG_CRC_QUIET_USE);
#ifdef PNG_IGNORE_ADLER32
png_set_option(png_handler.png_ptr, PNG_IGNORE_ADLER32, PNG_OPTION_ON);
#endif
// Setting up reading from buffer.
png_handler.buf_state = new BufState();
png_handler.buf_state->data = data + kPngHeaderSize;
png_handler.buf_state->bytes_left = size - kPngHeaderSize;
png_set_read_fn(png_handler.png_ptr, png_handler.buf_state, user_read_data);
png_set_sig_bytes(png_handler.png_ptr, kPngHeaderSize);
if (setjmp(png_jmpbuf(png_handler.png_ptr))) {
PNG_CLEANUP
return 0;
}
// Reading.
png_read_info(png_handler.png_ptr, png_handler.info_ptr);
// reset error handler to put png_deleter into scope.
if (setjmp(png_jmpbuf(png_handler.png_ptr))) {
PNG_CLEANUP
return 0;
}
png_uint_32 width, height;
int bit_depth, color_type, interlace_type, compression_type;
int filter_type;
if (!png_get_IHDR(png_handler.png_ptr, png_handler.info_ptr, &width,
&height, &bit_depth, &color_type, &interlace_type,
&compression_type, &filter_type)) {
PNG_CLEANUP
return 0;
}
// This is going to be too slow.
if (width && height > 100000000 / width) {
PNG_CLEANUP
#ifdef HAS_DUMMY_CRASH
#ifdef __aarch64__
asm volatile (".word 0xf7f0a000\n");
#else
asm("ud2");
#endif
#endif
return 0;
}
// Set several transforms that browsers typically use:
png_set_gray_to_rgb(png_handler.png_ptr);
png_set_expand(png_handler.png_ptr);
png_set_packing(png_handler.png_ptr);
png_set_scale_16(png_handler.png_ptr);
png_set_tRNS_to_alpha(png_handler.png_ptr);
int passes = png_set_interlace_handling(png_handler.png_ptr);
png_read_update_info(png_handler.png_ptr, png_handler.info_ptr);
png_handler.row_ptr = png_malloc(
png_handler.png_ptr, png_get_rowbytes(png_handler.png_ptr,
png_handler.info_ptr));
for (int pass = 0; pass < passes; ++pass) {
for (png_uint_32 y = 0; y < height; ++y) {
png_read_row(png_handler.png_ptr,
static_cast<png_bytep>(png_handler.row_ptr), nullptr);
}
}
png_read_end(png_handler.png_ptr, png_handler.end_info_ptr);
PNG_CLEANUP
return 0;
}


@ -0,0 +1,33 @@
use libafl_cc::{ClangWrapper, CompilerWrapper, LIB_EXT, LIB_PREFIX};
use std::env;
fn main() {
let args: Vec<String> = env::args().collect();
if args.len() > 1 {
let mut dir = env::current_exe().unwrap();
dir.pop();
let mut cc = ClangWrapper::new("clang", "clang++");
cc.from_args(&args)
.unwrap()
.add_arg("-fsanitize-coverage=trace-pc-guard".into())
.unwrap()
.add_link_arg(
dir.join(format!("{}libfuzzer_libpng.{}", LIB_PREFIX, LIB_EXT))
.display()
.to_string(),
)
.unwrap();
// Libraries needed by libafl on Windows
#[cfg(windows)]
cc.add_link_arg("-lws2_32".into())
.unwrap()
.add_link_arg("-lBcrypt".into())
.unwrap()
.add_link_arg("-lAdvapi32".into())
.unwrap();
cc.run().unwrap();
} else {
panic!("LibAFL CC: No Arguments given");
}
}


@ -0,0 +1,34 @@
use libafl_cc::{ClangWrapper, CompilerWrapper, LIB_EXT, LIB_PREFIX};
use std::env;
fn main() {
let args: Vec<String> = env::args().collect();
if args.len() > 1 {
let mut dir = env::current_exe().unwrap();
dir.pop();
let mut cc = ClangWrapper::new("clang", "clang++");
cc.is_cpp()
.from_args(&args)
.unwrap()
.add_arg("-fsanitize-coverage=trace-pc-guard".into())
.unwrap()
.add_link_arg(
dir.join(format!("{}libfuzzer_libpng.{}", LIB_PREFIX, LIB_EXT))
.display()
.to_string(),
)
.unwrap();
// Libraries needed by libafl on Windows
#[cfg(windows)]
cc.add_link_arg("-lws2_32".into())
.unwrap()
.add_link_arg("-lBcrypt".into())
.unwrap()
.add_link_arg("-lAdvapi32".into())
.unwrap();
cc.run().unwrap();
} else {
panic!("LibAFL CC: No Arguments given");
}
}


@ -0,0 +1,12 @@
name: libfuzzer libpng
version: "0.1.0"
author: "Andrea Fioraldi <andreafioraldi@gmail.com>, Dominik Maier <domenukk@gmail.com>"
about: A clone of libfuzzer using libafl for a libpng harness.
args:
- cores:
short: c
long: cores
about: "spawn a client in each of the provided cores. Broker runs in the 0th core. 'all' to select all available cores. 'none' to run a client without binding to any core. eg: '1,2-4,6' selects the cores 1,2,3,4,6."
value_name: CORES
required: true
takes_value: true


@ -0,0 +1,178 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for libpng.
//! In this example, you will see the use of the `launcher` feature.
//! The `launcher` will spawn new processes for each cpu core.
use clap::{load_yaml, App};
use core::time::Duration;
use std::{env, path::PathBuf};
use libafl::{
bolts::{
current_nanos,
launcher::Launcher,
os::parse_core_bind_arg,
rands::StdRand,
shmem::{ShMemProvider, StdShMemProvider},
tuples::tuple_list,
},
corpus::{
Corpus, InMemoryCorpus, IndexesLenTimeMinimizerCorpusScheduler, OnDiskCorpus,
QueueCorpusScheduler,
},
executors::{inprocess::InProcessExecutor, ExitKind, TimeoutExecutor},
feedback_or,
feedbacks::{CrashFeedback, MapFeedbackState, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
fuzzer::{Fuzzer, StdFuzzer},
mutators::scheduled::{havoc_mutations, StdScheduledMutator},
mutators::token_mutations::Tokens,
observers::{HitcountsMapObserver, StdMapObserver, TimeObserver},
stages::mutational::StdMutationalStage,
state::{HasCorpus, HasMetadata, StdState},
stats::SimpleStats,
};
use libafl_targets::{libfuzzer_initialize, libfuzzer_test_one_input, EDGES_MAP, MAX_EDGES_NUM};
/// The main fn, `no_mangle` as it is a C main
#[no_mangle]
pub fn main() {
// Registry the metadata types used in this fuzzer
// Needed only on no_std
//RegistryBuilder::register::<Tokens>();
let yaml = load_yaml!("clap-config.yaml");
let matches = App::from(yaml).get_matches();
let broker_port = 1337;
let cores = parse_core_bind_arg(&matches.value_of("cores").unwrap())
.expect("No valid core count given!");
println!(
"Workdir: {:?}",
env::current_dir().unwrap().to_string_lossy().to_string()
);
#[cfg(target_os = "android")]
AshmemService::start().expect("Failed to start Ashmem service");
let shmem_provider = StdShMemProvider::new().expect("Failed to init shared memory");
let stats_closure = |s| println!("{}", s);
let stats = SimpleStats::new(stats_closure);
let mut client_init_stats = || Ok(SimpleStats::new(stats_closure));
let mut run_client = |state: Option<StdState<_, _, _, _, _>>, mut restarting_mgr| {
let corpus_dirs = &[PathBuf::from("./corpus")];
let objective_dir = PathBuf::from("./crashes");
// Create an observation channel using the coverage map
let edges = unsafe { &mut EDGES_MAP[0..MAX_EDGES_NUM] };
let edges_observer = HitcountsMapObserver::new(StdMapObserver::new("edges", edges));
// Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time");
// The state of the edges feedback.
let feedback_state = MapFeedbackState::with_observer(&edges_observer);
// Feedback to rate the interestingness of an input
// This one is composed by two Feedbacks in OR
let feedback = feedback_or!(
// New maximization map feedback linked to the edges observer and the feedback state
MaxMapFeedback::new_tracking(&feedback_state, &edges_observer, true, false),
// Time feedback, this one does not need a feedback state
TimeFeedback::new_with_observer(&time_observer)
);
// A feedback to choose if an input is a solution or not
let objective = feedback_or!(CrashFeedback::new(), TimeoutFeedback::new());
// If not restarting, create a State from scratch
let mut state = state.unwrap_or_else(|| {
StdState::new(
// RNG
StdRand::with_seed(current_nanos()),
// Corpus that will be evolved, we keep it in memory for performance
InMemoryCorpus::new(),
// Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new(objective_dir).unwrap(),
// States of the feedbacks.
// They are the data related to the feedbacks that you want to persist in the State.
tuple_list!(feedback_state),
)
});
println!("We're a client, let's fuzz :)");
// Create a PNG dictionary if not existing
if state.metadata().get::<Tokens>().is_none() {
state.add_metadata(Tokens::new(vec![
vec![137, 80, 78, 71, 13, 10, 26, 10], // PNG header
"IHDR".as_bytes().to_vec(),
"IDAT".as_bytes().to_vec(),
"PLTE".as_bytes().to_vec(),
"IEND".as_bytes().to_vec(),
]));
}
// Setup a basic mutator with a mutational stage
let mutator = StdScheduledMutator::new(havoc_mutations());
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
// A minimization+queue policy to get testcasess from the corpus
let scheduler = IndexesLenTimeMinimizerCorpusScheduler::new(QueueCorpusScheduler::new());
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
// The wrapped harness function, calling out to the LLVM-style harness
let mut harness = |buf: &[u8]| {
libfuzzer_test_one_input(buf);
ExitKind::Ok
};
// Create the executor for an in-process function with one observer for edge coverage and one for the execution time
let mut executor = TimeoutExecutor::new(
InProcessExecutor::new(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut restarting_mgr,
)?,
// 10 seconds timeout
Duration::new(10, 0),
);
// The actual target run starts here.
// Call LLVMFUzzerInitialize() if present.
let args: Vec<String> = env::args().collect();
if libfuzzer_initialize(&args) == -1 {
println!("Warning: LLVMFuzzerInitialize failed with -1")
}
// In case the corpus is empty (on first run), reset
if state.corpus().count() < 1 {
state
.load_initial_inputs(&mut fuzzer, &mut executor, &mut restarting_mgr, corpus_dirs)
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", corpus_dirs));
println!("We imported {} inputs from disk.", state.corpus().count());
}
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut restarting_mgr)?;
Ok(())
};
Launcher::builder()
.shmem_provider(shmem_provider)
.stats(stats)
.client_init_stats(&mut client_init_stats)
.run_client(&mut run_client)
.cores(&cores)
.broker_port(broker_port)
.stdout_file(Some("/dev/null"))
.build()
.launch()
.expect("Launcher failed");
}


@ -1,6 +1,6 @@
[package]
name = "libfuzzer_reachability"
version = "0.2.0"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>"]
edition = "2018"


@ -4,7 +4,7 @@
use std::{env, path::PathBuf};
use libafl::{
bolts::tuples::tuple_list,
bolts::{current_nanos, rands::StdRand, tuples::tuple_list},
corpus::{Corpus, InMemoryCorpus, OnDiskCorpus, RandCorpusScheduler},
events::{setup_restarting_mgr_std, EventRestarter},
executors::{inprocess::InProcessExecutor, ExitKind},
@ -16,7 +16,6 @@ use libafl::{
stages::mutational::StdMutationalStage,
state::{HasCorpus, HasMetadata, StdState},
stats::SimpleStats,
utils::{current_nanos, StdRand},
Error,
};
use libafl_targets::{libfuzzer_initialize, libfuzzer_test_one_input, EDGES_MAP, MAX_EDGES_NUM};
@ -27,7 +26,7 @@ extern "C" {
static __libafl_target_list: *mut usize;
}
/// The main fn, no_mangle as it is a C main
/// The main fn, `no_mangle` as it is a C main
#[no_mangle]
pub fn main() {
// Registry the metadata types used in this fuzzer
@ -39,7 +38,7 @@ pub fn main() {
env::current_dir().unwrap().to_string_lossy().to_string()
);
fuzz(
vec![PathBuf::from("./corpus")],
&[PathBuf::from("./corpus")],
PathBuf::from("./crashes"),
1337,
)
@ -47,7 +46,7 @@ pub fn main() {
}
/// The actual fuzzer
fn fuzz(corpus_dirs: Vec<PathBuf>, objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
fn fuzz(corpus_dirs: &[PathBuf], objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
// 'While the stats are state, they are usually used in the broker - which is likely never restarted
let stats = SimpleStats::new(|s| println!("{}", s));
@ -150,10 +149,7 @@ fn fuzz(corpus_dirs: Vec<PathBuf>, objective_dir: PathBuf, broker_port: u16) ->
&mut restarting_mgr,
&corpus_dirs,
)
.expect(&format!(
"Failed to load initial corpus at {:?}",
&corpus_dirs
));
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &corpus_dirs));
println!("We imported {} inputs from disk.", state.corpus().count());
}


@ -1,6 +1,6 @@
[package]
name = "libfuzzer_stb_image"
version = "0.2.0"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>"]
edition = "2018"
build = "build.rs"


@ -1,10 +1,10 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for stb_image.
//! The example harness is built for `stb_image`.
use std::{env, path::PathBuf};
use libafl::{
bolts::tuples::tuple_list,
bolts::{current_nanos, rands::StdRand, tuples::tuple_list},
corpus::{
Corpus, InMemoryCorpus, IndexesLenTimeMinimizerCorpusScheduler, OnDiskCorpus,
QueueCorpusScheduler,
@ -20,7 +20,6 @@ use libafl::{
stages::mutational::StdMutationalStage,
state::{HasCorpus, HasMetadata, StdState},
stats::SimpleStats,
utils::{current_nanos, StdRand},
Error,
};
@ -36,7 +35,7 @@ pub fn main() {
env::current_dir().unwrap().to_string_lossy().to_string()
);
fuzz(
vec![PathBuf::from("./corpus")],
&[PathBuf::from("./corpus")],
PathBuf::from("./crashes"),
1337,
)
@ -44,7 +43,7 @@ pub fn main() {
}
/// The actual fuzzer
fn fuzz(corpus_dirs: Vec<PathBuf>, objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
fn fuzz(corpus_dirs: &[PathBuf], objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
// 'While the stats are state, they are usually used in the broker - which is likely never restarted
let stats = SimpleStats::new(|s| println!("{}", s));
@ -154,10 +153,7 @@ fn fuzz(corpus_dirs: Vec<PathBuf>, objective_dir: PathBuf, broker_port: u16) ->
&mut restarting_mgr,
&corpus_dirs,
)
.expect(&format!(
"Failed to load initial corpus at {:?}",
&corpus_dirs
));
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &corpus_dirs));
println!("We imported {} inputs from disk.", state.corpus().count());
}


@ -1,6 +1,6 @@
[package]
name = "libafl"
version = "0.2.1"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>"]
description = "Slot your own fuzzers together and extend their features using Rust"
documentation = "https://docs.rs/libafl"
@ -37,7 +37,7 @@ harness = false
[features]
default = ["std", "anymap_debug", "derive", "llmp_compression"]
std = [] # print, sharedmap, ... support
std = [] # print, env, launcher ... support
anymap_debug = ["serde_json"] # uses serde_json to Debug the anymap trait. Disable for smaller footprint.
derive = ["libafl_derive"] # provide derive(SerdeAny) macro.
llmp_bind_public = [] # If set, llmp will bind to 0.0.0.0, allowing cross-device communication. Binds to localhost by default.
@ -52,6 +52,7 @@ path = "./examples/llmp_test/main.rs"
required-features = ["std"]
[dependencies]
libafl_derive = { optional = true, path = "../libafl_derive", version = "0.3.0" }
tuple_list = "0.1.2"
hashbrown = { version = "0.9", features = ["serde", "ahash-compile-time-rng"] } # A faster hashmap, nostd compatible
num = "0.4.0"
@ -61,11 +62,12 @@ erased-serde = "0.3.12"
postcard = { version = "0.5.1", features = ["alloc"] } # no_std compatible serde serialization fromat
static_assertions = "1.1.0"
ctor = "0.1.20"
libafl_derive = { optional = true, path = "../libafl_derive", version = "0.2.1" }
serde_json = { version = "1.0", optional = true, default-features = false, features = ["alloc"] } # an easy way to debug print SerdeAnyMap
compression = { version = "0.1.5" }
core_affinity = { version = "0.5", git = "https://github.com/s1341/core_affinity_rs" }
num_enum = "0.5.1"
hostname = "^0.3" # Is there really no gethostname in the stdlib?
typed-builder = "0.9.0"
[target.'cfg(target_os = "android")'.dependencies]
backtrace = { version = "0.3", optional = true, default-features = false, features = ["std", "libbacktrace"] } # for llmp_debug


@ -7,7 +7,7 @@ use xxhash_rust::const_xxh3;
use xxhash_rust::xxh3;
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use libafl::utils::{Rand, StdRand};
use libafl::bolts::rands::{Rand, StdRand};
fn criterion_benchmark(c: &mut Criterion) {
let mut rand = StdRand::with_seed(0);


@ -1,7 +1,7 @@
//! Compare the speed of rand implementations
use criterion::{black_box, criterion_group, criterion_main, Criterion};
use libafl::utils::{
use libafl::bolts::rands::{
Lehmer64Rand, Rand, RomuDuoJrRand, RomuTrioRand, XorShift64Rand, Xoshiro256StarRand,
};


@ -2,8 +2,10 @@
use rustc_version::{version_meta, Channel};
#[allow(clippy::ptr_arg, clippy::upper_case_acronyms)]
fn main() {
#[cfg(target_os = "windows")]
#[allow(clippy::ptr_arg, clippy::upper_case_acronyms)]
windows::build!(
windows::win32::system_services::{HANDLE, BOOL, PAGE_TYPE, PSTR, ExitProcess},
windows::win32::windows_programming::CloseHandle,


@ -19,7 +19,7 @@ impl GzipCompressor {
/// When given a `threshold` of `0`, the `GzipCompressor` will always compress.
#[must_use]
pub fn new(threshold: usize) -> Self {
GzipCompressor { threshold }
Self { threshold }
}
}


@ -0,0 +1,244 @@
#[cfg(feature = "std")]
use serde::de::DeserializeOwned;
#[cfg(feature = "std")]
use crate::{
bolts::shmem::ShMemProvider,
events::{LlmpRestartingEventManager, ManagerKind, RestartingMgr},
inputs::Input,
observers::ObserversTuple,
stats::Stats,
Error,
};
#[cfg(all(windows, feature = "std"))]
use crate::bolts::os::startable_self;
#[cfg(all(unix, feature = "std"))]
use crate::bolts::os::{dup2, fork, ForkResult};
#[cfg(all(unix, feature = "std"))]
use std::{fs::File, os::unix::io::AsRawFd};
#[cfg(feature = "std")]
use std::net::SocketAddr;
#[cfg(all(windows, feature = "std"))]
use std::process::Stdio;
#[cfg(all(windows, feature = "std"))]
use core_affinity::CoreId;
#[cfg(feature = "std")]
use typed_builder::TypedBuilder;
/// The Launcher client callback type reference
#[cfg(feature = "std")]
pub type LauncherClientFnRef<'a, I, OT, S, SP, ST> =
&'a mut dyn FnMut(Option<S>, LlmpRestartingEventManager<I, OT, S, SP, ST>) -> Result<(), Error>;
/// Provides a Launcher, which can be used to launch a fuzzing run on a specified list of cores
#[cfg(feature = "std")]
#[derive(TypedBuilder)]
#[allow(clippy::type_complexity)]
pub struct Launcher<'a, I, OT, S, SP, ST>
where
I: Input,
ST: Stats,
SP: ShMemProvider + 'static,
OT: ObserversTuple,
S: DeserializeOwned,
{
/// The ShmemProvider to use
shmem_provider: SP,
/// The stats instance to use
stats: ST,
/// A closure or function which generates stats instances for newly spawned clients
client_init_stats: &'a mut dyn FnMut() -> Result<ST, Error>,
/// The 'main' function to run for each client forked. This probably shouldn't return
run_client: LauncherClientFnRef<'a, I, OT, S, SP, ST>,
/// The broker port to use
#[builder(default = 1337_u16)]
broker_port: u16,
/// The list of cores to run on
cores: &'a [usize],
/// A file name to write all client output to
#[builder(default = None)]
stdout_file: Option<&'a str>,
/// The `ip:port` address of another broker to connect our new broker to for multi-machine
/// clusters.
#[builder(default = None)]
remote_broker_addr: Option<SocketAddr>,
}
#[cfg(feature = "std")]
impl<'a, I, OT, S, SP, ST> Launcher<'a, I, OT, S, SP, ST>
where
I: Input,
OT: ObserversTuple,
ST: Stats + Clone,
SP: ShMemProvider + 'static,
S: DeserializeOwned,
{
/// Launch the broker and the clients and fuzz
#[cfg(all(unix, feature = "std"))]
#[allow(clippy::similar_names)]
pub fn launch(&mut self) -> Result<(), Error> {
let core_ids = core_affinity::get_core_ids().unwrap();
let num_cores = core_ids.len();
let mut handles = vec![];
println!("spawning on cores: {:?}", self.cores);
let file = self
.stdout_file
.map(|filename| File::create(filename).unwrap());
//spawn clients
for (id, bind_to) in core_ids.iter().enumerate().take(num_cores) {
if self.cores.iter().any(|&x| x == id) {
self.shmem_provider.pre_fork()?;
match unsafe { fork() }? {
ForkResult::Parent(child) => {
self.shmem_provider.post_fork(false)?;
handles.push(child.pid);
#[cfg(feature = "std")]
println!("child spawned and bound to core {}", id);
}
ForkResult::Child => {
self.shmem_provider.post_fork(true)?;
#[cfg(feature = "std")]
std::thread::sleep(std::time::Duration::from_secs((id + 1) as u64));
#[cfg(feature = "std")]
if file.is_some() {
dup2(file.as_ref().unwrap().as_raw_fd(), libc::STDOUT_FILENO)?;
dup2(file.as_ref().unwrap().as_raw_fd(), libc::STDERR_FILENO)?;
}
// Fuzzer client: keeps retrying the connection to the broker until the broker starts
let stats = (self.client_init_stats)()?;
let (state, mgr) = RestartingMgr::builder()
.shmem_provider(self.shmem_provider.clone())
.stats(stats)
.broker_port(self.broker_port)
.kind(ManagerKind::Client {
cpu_core: Some(*bind_to),
})
.build()
.launch()?;
(self.run_client)(state, mgr)?;
break;
}
};
}
}
#[cfg(feature = "std")]
println!("I am broker!!.");
RestartingMgr::<I, OT, S, SP, ST>::builder()
.shmem_provider(self.shmem_provider.clone())
.stats(self.stats.clone())
.broker_port(self.broker_port)
.kind(ManagerKind::Broker)
.remote_broker_addr(self.remote_broker_addr)
.build()
.launch()?;
//broker exited. kill all clients.
for handle in &handles {
unsafe {
libc::kill(*handle, libc::SIGINT);
}
}
Ok(())
}
/// Launch the broker and the clients and fuzz
#[cfg(all(windows, feature = "std"))]
#[allow(unused_mut)]
pub fn launch(&mut self) -> Result<(), Error> {
let is_client = std::env::var(_AFL_LAUNCHER_CLIENT);
let mut handles = match is_client {
Ok(core_conf) => {
//todo: silence stdout and stderr for clients
// the actual client. do the fuzzing
let stats = (self.client_init_stats)()?;
let (state, mgr) = RestartingMgr::<I, OT, S, SP, ST>::builder()
.shmem_provider(self.shmem_provider.clone())
.stats(stats)
.broker_port(self.broker_port)
.kind(ManagerKind::Client {
cpu_core: Some(CoreId {
id: core_conf.parse()?,
}),
})
.build()
.launch()?;
(self.run_client)(state, mgr)?;
unreachable!("Fuzzer client code should never get here!");
}
Err(std::env::VarError::NotPresent) => {
// I am a broker
// before going to the broker loop, spawn n clients
if self.stdout_file.is_some() {
println!("Child process file stdio is not supported on Windows yet. Dumping to stdout instead...");
}
let core_ids = core_affinity::get_core_ids().unwrap();
let num_cores = core_ids.len();
let mut handles = vec![];
println!("spawning on cores: {:?}", self.cores);
//spawn clients
for (id, _) in core_ids.iter().enumerate().take(num_cores) {
if self.cores.iter().any(|&x| x == id) {
for id in 0..num_cores {
let stdio = if self.stdout_file.is_some() {
Stdio::inherit()
} else {
Stdio::null()
};
if self.cores.iter().any(|&x| x == id) {
std::env::set_var(_AFL_LAUNCHER_CLIENT, id.to_string());
let child = startable_self()?.stdout(stdio).spawn()?;
handles.push(child);
}
}
}
}
handles
}
Err(_) => panic!("Env variables are broken, received non-unicode!"),
};
#[cfg(feature = "std")]
println!("I am broker!!.");
RestartingMgr::<I, OT, S, SP, ST>::builder()
.shmem_provider(self.shmem_provider.clone())
.stats(self.stats.clone())
.broker_port(self.broker_port)
.kind(ManagerKind::Broker)
.remote_broker_addr(self.remote_broker_addr)
.build()
.launch()?;
//broker exited. kill all clients.
for handle in &mut handles {
handle.kill()?;
}
Ok(())
}
}
const _AFL_LAUNCHER_CLIENT: &str = "AFL_LAUNCHER_CLIENT";
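// A minimal usage sketch of the `Launcher` builder above; `broker_stats`, `client_stats`,
// and `run_client` are placeholders, and the core list is parsed with `parse_core_bind_arg`
// from `bolts::os`:
//
//     let cores = parse_core_bind_arg("0-3").unwrap();
//     Launcher::builder()
//         .shmem_provider(StdShMemProvider::new()?)
//         .stats(broker_stats)
//         .client_init_stats(&mut || Ok(client_stats.clone()))
//         .run_client(&mut run_client)
//         .cores(&cores)
//         .broker_port(1337)
//         .stdout_file(Some("/dev/null"))
//         .remote_broker_addr(None)
//         .build()
//         .launch()?;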

View File

@ -70,11 +70,12 @@ use core::{
time::Duration,
};
use serde::{Deserialize, Serialize};
#[cfg(feature = "std")]
use std::{
convert::TryInto,
env,
io::{Read, Write},
io::{ErrorKind, Read, Write},
net::{SocketAddr, TcpListener, TcpStream, ToSocketAddrs},
sync::mpsc::channel,
thread,
@ -91,6 +92,10 @@ use crate::{
};
#[cfg(unix)]
use libc::ucontext_t;
#[cfg(all(unix, feature = "std"))]
use nix::sys::socket::{self, sockopt::ReusePort};
#[cfg(all(unix, feature = "std"))]
use std::os::unix::io::AsRawFd;
/// We'll start off with 256 megabyte maps per fuzzer client
#[cfg(not(feature = "llmp_small_maps"))]
@ -323,6 +328,19 @@ fn msg_offset_from_env(env_name: &str) -> Result<Option<u64>, Error> {
})
}
/// Bind to a tcp port on the [`_LLMP_BIND_ADDR`] (local, or global)
/// on a given `port`.
/// Will set `SO_REUSEPORT` on unix.
#[cfg(feature = "std")]
fn tcp_bind(port: u16) -> Result<TcpListener, Error> {
let listener = TcpListener::bind((_LLMP_BIND_ADDR, port))?;
#[cfg(unix)]
socket::setsockopt(listener.as_raw_fd(), ReusePort, &true)?;
Ok(listener)
}
/// Send one message as `u32` len and `[u8;len]` bytes
#[cfg(feature = "std")]
fn send_tcp_msg<T>(stream: &mut TcpStream, msg: &T) -> Result<(), Error>
@ -560,28 +578,48 @@ where
#[cfg(feature = "std")]
/// Creates either a broker (if the TCP port is not yet bound) or a client connected to this port.
pub fn on_port(shmem_provider: SP, port: u16) -> Result<Self, Error> {
match TcpListener::bind(format!("{}:{}", _LLMP_BIND_ADDR, port)) {
match tcp_bind(port) {
Ok(listener) => {
// We got the port. We are the broker! :)
dbg!("We're the broker");
let mut broker = LlmpBroker::new(shmem_provider)?;
let _listener_thread = broker.launch_listener(Listener::Tcp(listener))?;
Ok(LlmpConnection::IsBroker { broker })
}
Err(e) => {
println!("error: {:?}", e);
match e.kind() {
std::io::ErrorKind::AddrInUse => {
Err(Error::File(e)) if e.kind() == ErrorKind::AddrInUse => {
// We are the client :)
dbg!("We're the client", e);
println!(
"We're the client (internal port already bound by broker, {:#?})",
e
);
Ok(LlmpConnection::IsClient {
client: LlmpClient::create_attach_to_tcp(shmem_provider, port)?,
})
}
_ => Err(Error::File(e)),
Err(e) => Err(dbg!(e)),
}
}
/// Creates a new broker on the given port
#[cfg(feature = "std")]
pub fn broker_on_port(shmem_provider: SP, port: u16) -> Result<Self, Error> {
match tcp_bind(port) {
Ok(listener) => {
let mut broker = LlmpBroker::new(shmem_provider)?;
let _listener_thread = broker.launch_listener(Listener::Tcp(listener))?;
Ok(LlmpConnection::IsBroker { broker })
}
Err(e) => Err(e),
}
}
/// Creates a new client on the given port
#[cfg(feature = "std")]
pub fn client_on_port(shmem_provider: SP, port: u16) -> Result<Self, Error> {
Ok(LlmpConnection::IsClient {
client: LlmpClient::create_attach_to_tcp(shmem_provider, port)?,
})
}
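// A minimal usage sketch: the first process to call `on_port` wins the bind and becomes
// the broker; every later caller on the same port becomes a client. `shmem_provider` is
// a placeholder:
//
//     match LlmpConnection::on_port(shmem_provider, 1337)? {
//         LlmpConnection::IsBroker { broker } => { /* run the broker loop */ }
//         LlmpConnection::IsClient { client } => { /* talk to the broker via the client */ }
//     }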
/// Describe this in a reproducible fashion, if it's a client
@ -1671,7 +1709,7 @@ where
// TODO: handle broker_ids properly/at all.
let map_description = Self::b2b_thread_on(
stream,
&self.shmem_provider,
&mut self.shmem_provider,
self.llmp_clients.len() as ClientId,
&self.llmp_out.out_maps.first().unwrap().shmem.description(),
)?;
@ -1787,7 +1825,7 @@ where
/// Does so on the given port.
#[cfg(feature = "std")]
pub fn launch_tcp_listener_on(&mut self, port: u16) -> Result<thread::JoinHandle<()>, Error> {
let listener = TcpListener::bind(format!("{}:{}", _LLMP_BIND_ADDR, port))?;
let listener = tcp_bind(port)?;
// accept connections and process them, spawning a new thread for each one
println!("Server listening on port {}", port);
self.launch_listener(Listener::Tcp(listener))
@ -1822,11 +1860,13 @@ where
#[allow(clippy::let_and_return)]
fn b2b_thread_on(
mut stream: TcpStream,
shmem_provider: &SP,
shmem_provider: &mut SP,
b2b_client_id: ClientId,
broker_map_description: &ShMemDescription,
) -> Result<ShMemDescription, Error> {
let broker_map_description = *broker_map_description;
shmem_provider.pre_fork()?;
let mut shmem_provider_clone = shmem_provider.clone();
// A channel to get the new "client's" sharedmap id from
@ -1835,7 +1875,7 @@ where
// (For now) the remote broker-to-broker thread just acts like a "normal" llmp client, except that it proxies all messages to the attached socket, in both directions.
thread::spawn(move || {
// as always, call post_fork to potentially reconnect the provider (for threaded/forked use)
shmem_provider_clone.post_fork();
shmem_provider_clone.post_fork(true).unwrap();
#[cfg(feature = "llmp_debug")]
println!("B2b: Spawned proxy thread");
@ -1927,6 +1967,8 @@ where
}
});
shmem_provider.post_fork(false)?;
let ret = recv.recv().map_err(|_| {
Error::Unknown("Error launching background thread for b2b communcation".to_string())
});
@ -1944,7 +1986,7 @@ where
request: &TcpRequest,
current_client_id: &mut u32,
sender: &mut LlmpSender<SP>,
shmem_provider: &SP,
shmem_provider: &mut SP,
broker_map_description: &ShMemDescription,
) {
match request {
@ -2024,11 +2066,12 @@ where
let tcp_out_map_description = tcp_out_map.shmem.description();
self.register_client(tcp_out_map);
self.shmem_provider.pre_fork()?;
let mut shmem_provider_clone = self.shmem_provider.clone();
Ok(thread::spawn(move || {
let ret = thread::spawn(move || {
// Call `post_fork` (even though this is not forked) so we get a new connection to the cloned `ShMemServer` if we are using a `ServedShMemProvider`
shmem_provider_clone.post_fork();
shmem_provider_clone.post_fork(true).unwrap();
let mut current_client_id = llmp_tcp_id + 1;
@ -2080,7 +2123,7 @@ where
&req,
&mut current_client_id,
&mut tcp_incoming_sender,
&shmem_provider_clone,
&mut shmem_provider_clone,
&broker_map_description,
);
}
@ -2089,7 +2132,10 @@ where
}
};
}
}))
});
self.shmem_provider.post_fork(false)?;
Ok(ret)
}
/// Broker broadcast to its own page for all others to read
@ -2411,7 +2457,25 @@ where
#[cfg(feature = "std")]
/// Create a [`LlmpClient`], getting the ID from a given port
pub fn create_attach_to_tcp(mut shmem_provider: SP, port: u16) -> Result<Self, Error> {
let mut stream = TcpStream::connect(format!("{}:{}", _LLMP_BIND_ADDR, port))?;
let mut stream = match TcpStream::connect(format!("{}:{}", _LLMP_BIND_ADDR, port)) {
Ok(stream) => stream,
Err(e) => {
match e.kind() {
std::io::ErrorKind::ConnectionRefused => {
// Connection refused. Loop until the broker is up.
loop {
match TcpStream::connect(format!("{}:{}", _LLMP_BIND_ADDR, port)) {
Ok(stream) => break stream,
Err(_) => {
dbg!("Connection Refused.. Retrying");
}
}
}
}
_ => return Err(Error::IllegalState(e.to_string())),
}
}
};
println!("Connected to port {}", port);
let broker_map_description = if let TcpResponse::BrokerConnectHello {

View File

@ -1,13 +1,55 @@
//! Bolts are not conceptual fuzzing elements themselves, but they keep libafl-based fuzzers together.
pub mod bindings;
pub mod launcher;
pub mod llmp;
pub mod os;
pub mod ownedref;
pub mod rands;
pub mod serdeany;
pub mod shmem;
pub mod tuples;
#[cfg(feature = "llmp_compression")]
pub mod compress;
pub mod llmp;
pub mod os;
pub mod ownedref;
pub mod serdeany;
pub mod shmem;
pub mod tuples;
use core::time;
#[cfg(feature = "std")]
use std::time::{SystemTime, UNIX_EPOCH};
/// Can be converted to a slice
pub trait AsSlice<T> {
/// Convert to a slice
fn as_slice(&self) -> &[T];
}
/// Current time
#[cfg(feature = "std")]
#[must_use]
#[inline]
pub fn current_time() -> time::Duration {
SystemTime::now().duration_since(UNIX_EPOCH).unwrap()
}
/// Current time (fixed fallback for no_std)
#[cfg(not(feature = "std"))]
#[inline]
pub fn current_time() -> time::Duration {
// We may not have a rt clock available.
// TODO: Make it somehow plugin-able
time::Duration::from_millis(1)
}
/// Gets current nanoseconds since [`UNIX_EPOCH`]
#[must_use]
#[inline]
pub fn current_nanos() -> u64 {
current_time().as_nanos() as u64
}
/// Gets current milliseconds since [`UNIX_EPOCH`]
#[must_use]
#[inline]
pub fn current_milliseconds() -> u64 {
current_time().as_millis() as u64
}

View File

@ -76,8 +76,8 @@ impl ShMem for ServedShMem {
impl ServedShMemProvider {
/// Send a request to the server, and wait for a response
#[allow(clippy::similar_names)] // id and fd
fn send_receive(&mut self, request: AshmemRequest) -> (i32, i32) {
let body = postcard::to_allocvec(&request).unwrap();
fn send_receive(&mut self, request: AshmemRequest) -> Result<(i32, i32), Error> {
let body = postcard::to_allocvec(&request)?;
let header = (body.len() as u32).to_be_bytes();
let mut message = header.to_vec();
@ -95,8 +95,8 @@ impl ServedShMemProvider {
let server_id = ShMemId::from_slice(&shm_slice);
let server_id_str = server_id.to_string();
let server_fd: i32 = server_id_str.parse().unwrap();
(server_fd, fd_buf[0])
let server_fd: i32 = server_id_str.parse()?;
Ok((server_fd, fd_buf[0]))
}
}
@ -118,18 +118,16 @@ impl ShMemProvider for ServedShMemProvider {
/// Connect to the server and return a new [`ServedShMemProvider`]
fn new() -> Result<Self, Error> {
let mut res = Self {
stream: UnixStream::connect_to_unix_addr(
&UnixSocketAddr::new(ASHMEM_SERVER_NAME).unwrap(),
)?,
stream: UnixStream::connect_to_unix_addr(&UnixSocketAddr::new(ASHMEM_SERVER_NAME)?)?,
inner: AshmemShMemProvider::new()?,
id: -1,
};
let (id, _) = res.send_receive(AshmemRequest::Hello(None));
let (id, _) = res.send_receive(AshmemRequest::Hello(None))?;
res.id = id;
Ok(res)
}
fn new_map(&mut self, map_size: usize) -> Result<Self::Mem, crate::Error> {
let (server_fd, client_fd) = self.send_receive(AshmemRequest::NewMap(map_size));
let (server_fd, client_fd) = self.send_receive(AshmemRequest::NewMap(map_size))?;
Ok(ServedShMem {
inner: ManuallyDrop::new(
@ -145,7 +143,7 @@ impl ShMemProvider for ServedShMemProvider {
let server_id_str = parts.get(0).unwrap();
let (server_fd, client_fd) = self.send_receive(AshmemRequest::ExistingMap(
ShMemDescription::from_string_and_size(server_id_str, size),
));
))?;
Ok(ServedShMem {
inner: ManuallyDrop::new(
self.inner
@ -155,16 +153,21 @@ impl ShMemProvider for ServedShMemProvider {
})
}
fn post_fork(&mut self) {
fn post_fork(&mut self, is_child: bool) -> Result<(), Error> {
if is_child {
// After fork, the child needs to reconnect so as not to share the fds with the parent.
self.stream =
UnixStream::connect_to_unix_addr(&UnixSocketAddr::new(ASHMEM_SERVER_NAME).unwrap())
.expect("Unable to reconnect to the ashmem service");
let (id, _) = self.send_receive(AshmemRequest::Hello(Some(self.id)));
UnixStream::connect_to_unix_addr(&UnixSocketAddr::new(ASHMEM_SERVER_NAME)?)?;
let (id, _) = self.send_receive(AshmemRequest::Hello(Some(self.id)))?;
self.id = id;
}
Ok(())
}
fn release_map(&mut self, map: &mut Self::Mem) {
let (refcount, _) = self.send_receive(AshmemRequest::Deregister(map.server_fd));
let (refcount, _) = self
.send_receive(AshmemRequest::Deregister(map.server_fd))
.expect("Could not communicate to AshMem server!");
if refcount == 0 {
unsafe {
ManuallyDrop::drop(&mut map.inner);

View File

@ -1,9 +1,200 @@
//! Operating System specific abstractions
use alloc::vec::Vec;
#[cfg(any(unix, all(windows, feature = "std")))]
use crate::Error;
#[cfg(feature = "std")]
use std::{env, process::Command};
#[cfg(all(unix, feature = "std"))]
pub mod ashmem_server;
#[cfg(unix)]
pub mod unix_signals;
#[cfg(windows)]
#[cfg(unix)]
pub mod pipes;
#[cfg(all(unix, feature = "std"))]
use std::{ffi::CString, fs::File};
#[cfg(all(windows, feature = "std"))]
pub mod windows_exceptions;
#[cfg(unix)]
use libc::pid_t;
/// Child Process Handle
#[cfg(unix)]
pub struct ChildHandle {
pub pid: pid_t,
}
#[cfg(unix)]
impl ChildHandle {
/// Block until the child has exited and the status code becomes available
#[must_use]
pub fn status(&self) -> i32 {
let mut status = -1;
unsafe {
libc::waitpid(self.pid, &mut status, 0);
}
status
}
}
/// The `ForkResult` (result of a fork)
#[cfg(unix)]
pub enum ForkResult {
/// The fork finished, we are the parent process.
/// The child has the handle `ChildHandle`.
Parent(ChildHandle),
/// The fork finished, we are the child process.
Child,
}
/// Unix has forks.
/// # Safety
/// A normal fork. Runs in two processes. Should be memory safe in general.
#[cfg(unix)]
pub unsafe fn fork() -> Result<ForkResult, Error> {
match libc::fork() {
pid if pid > 0 => Ok(ForkResult::Parent(ChildHandle { pid })),
pid if pid < 0 => {
// Getting errno from rust is hard, we'll just let the libc print to stderr for now.
// In any case, this should usually not happen.
#[cfg(feature = "std")]
{
let err_str = CString::new("Fork failed").unwrap();
libc::perror(err_str.as_ptr());
}
Err(Error::Unknown(format!("Fork failed ({})", pid)))
}
_ => Ok(ForkResult::Child),
}
}
/// Executes the current process from the beginning, as a subprocess.
/// Use `startable_self()?.status()?` to wait for the child.
#[cfg(feature = "std")]
pub fn startable_self() -> Result<Command, Error> {
let mut startable = Command::new(env::current_exe()?);
startable
.current_dir(env::current_dir()?)
.args(env::args().skip(1));
Ok(startable)
}
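// A minimal usage sketch, following the doc comment above:
//
//     let exit_status = startable_self()?.status()?; // blocks until the child exits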
/// Allows one to walk the mappings in /proc/self/maps, calling a callback function for each
/// mapping.
/// If the callback returns true, we stop the walk.
#[cfg(all(feature = "std", any(target_os = "linux", target_os = "android")))]
pub fn walk_self_maps(visitor: &mut dyn FnMut(usize, usize, String, String) -> bool) {
use regex::Regex;
use std::io::{BufRead, BufReader};
let re = Regex::new(r"^(?P<start>[0-9a-f]{8,16})-(?P<end>[0-9a-f]{8,16}) (?P<perm>[-rwxp]{4}) (?P<offset>[0-9a-f]{8}) [0-9a-f]+:[0-9a-f]+ [0-9]+\s+(?P<path>.*)$")
.unwrap();
let mapsfile = File::open("/proc/self/maps").expect("Unable to open /proc/self/maps");
for line in BufReader::new(mapsfile).lines() {
let line = line.unwrap();
if let Some(caps) = re.captures(&line) {
if visitor(
usize::from_str_radix(caps.name("start").unwrap().as_str(), 16).unwrap(),
usize::from_str_radix(caps.name("end").unwrap().as_str(), 16).unwrap(),
caps.name("perm").unwrap().as_str().to_string(),
caps.name("path").unwrap().as_str().to_string(),
) {
break;
};
}
}
}
/// Get the start and end address, permissions and path of the mapping containing a particular address
#[cfg(all(feature = "std", any(target_os = "linux", target_os = "android")))]
pub fn find_mapping_for_address(address: usize) -> Result<(usize, usize, String, String), Error> {
let mut result = (0, 0, "".to_string(), "".to_string());
walk_self_maps(&mut |start, end, permissions, path| {
if start <= address && address < end {
result = (start, end, permissions, path);
true
} else {
false
}
});
if result.0 == 0 {
Err(Error::Unknown(
"Couldn't find a mapping for this address".to_string(),
))
} else {
Ok(result)
}
}
/// Get the start and end address of the mapping with a particular path
#[cfg(all(feature = "std", any(target_os = "linux", target_os = "android")))]
#[must_use]
pub fn find_mapping_for_path(libpath: &str) -> (usize, usize) {
let mut libstart = 0;
let mut libend = 0;
walk_self_maps(&mut |start, end, _permissions, path| {
if libpath == path {
if libstart == 0 {
libstart = start;
}
libend = end;
}
false
});
(libstart, libend)
}
/// "Safe" wrapper around dup2
#[cfg(all(unix, feature = "std"))]
pub fn dup2(fd: i32, device: i32) -> Result<(), Error> {
match unsafe { libc::dup2(fd, device) } {
-1 => Err(Error::File(std::io::Error::last_os_error())),
_ => Ok(()),
}
}
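// A minimal usage sketch (assuming `AsRawFd` is in scope): silence a client by pointing
// stdout at a log file, as the unix Launcher does:
//
//     let log = File::create("client.log")?;
//     dup2(log.as_raw_fd(), libc::STDOUT_FILENO)?;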
/// Parses core binding args from user input
/// Returns a Vec of CPU IDs.
/// `./fuzzer --cores 1,2-4,6` -> clients run on cores 1,2,3,4,6
/// `./fuzzer --cores all` -> one client runs on each available core
#[must_use]
pub fn parse_core_bind_arg(args: &str) -> Option<Vec<usize>> {
let mut cores: Vec<usize> = vec![];
if args == "all" {
let num_cores = core_affinity::get_core_ids().unwrap().len();
for x in 0..num_cores {
cores.push(x);
}
} else {
let core_args: Vec<&str> = args.split(',').collect();
// ./fuzzer --cores 1,2-4,6 -> clients run on cores 1,2,3,4,6
// ./fuzzer --cores all -> one client runs on each available core
for csv in core_args {
let core_range: Vec<&str> = csv.split('-').collect();
if core_range.len() == 1 {
cores.push(core_range[0].parse::<usize>().unwrap());
} else if core_range.len() == 2 {
for x in core_range[0].parse::<usize>().unwrap()
..=(core_range[1].parse::<usize>().unwrap())
{
cores.push(x);
}
}
}
}
Some(cores)
}
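// A quick check of what the parser above produces for the documented forms:
//
//     assert_eq!(parse_core_bind_arg("1,2-4,6"), Some(vec![1, 2, 3, 4, 6]));
//     // "all" expands to 0..num_cores of the current machine.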

View File

@ -0,0 +1,93 @@
//! Unix `pipe` wrapper for `LibAFL`
use crate::Error;
use nix::unistd::{close, pipe};
#[cfg(feature = "std")]
use nix::unistd::{read, write};
#[cfg(feature = "std")]
use std::{
io::{self, ErrorKind, Read, Write},
os::unix::io::RawFd,
};
#[cfg(not(feature = "std"))]
type RawFd = i32;
#[derive(Debug, Clone)]
pub struct Pipe {
read_end: Option<RawFd>,
write_end: Option<RawFd>,
}
impl Pipe {
pub fn new() -> Result<Self, Error> {
let (read_end, write_end) = pipe()?;
Ok(Self {
read_end: Some(read_end),
write_end: Some(write_end),
})
}
pub fn close_read_end(&mut self) {
if let Some(read_end) = self.read_end {
let _ = close(read_end);
self.read_end = None;
}
}
pub fn close_write_end(&mut self) {
if let Some(write_end) = self.write_end {
let _ = close(write_end);
self.write_end = None;
}
}
}
#[cfg(feature = "std")]
impl Read for Pipe {
/// Reads bytes from the read end of the pipe
fn read(&mut self, buf: &mut [u8]) -> Result<usize, io::Error> {
match self.read_end {
Some(read_end) => match read(read_end, buf) {
Ok(res) => Ok(res),
Err(e) => Err(io::Error::from_raw_os_error(e.as_errno().unwrap() as i32)),
},
None => Err(io::Error::new(
ErrorKind::BrokenPipe,
"Read pipe end was already closed",
)),
}
}
}
#[cfg(feature = "std")]
impl Write for Pipe {
/// Writes bytes to the write end of the pipe
fn write(&mut self, buf: &[u8]) -> Result<usize, io::Error> {
match self.write_end {
Some(write_end) => match write(write_end, buf) {
Ok(res) => Ok(res),
Err(e) => Err(io::Error::from_raw_os_error(e.as_errno().unwrap() as i32)),
},
None => Err(io::Error::new(
ErrorKind::BrokenPipe,
"Write pipe end was already closed",
)),
}
}
fn flush(&mut self) -> Result<(), io::Error> {
Ok(())
}
}
impl Drop for Pipe {
fn drop(&mut self) {
if let Some(read_end) = self.read_end {
let _ = close(read_end);
}
if let Some(write_end) = self.write_end {
let _ = close(write_end);
}
}
}
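// A minimal usage sketch: a `Pipe` as a one-shot latch between two processes, as
// `RcShMemProvider` uses it in `shmem.rs`:
//
//     let mut pipe = Pipe::new()?;
//     pipe.write_all(&[1u8])?;       // e.g. the parent signals that it is done
//     let mut buf = [0u8; 1];
//     pipe.read_exact(&mut buf)?;    // e.g. the child blocks until the parent wrote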

View File

@ -1,8 +1,10 @@
//! Exception handling for Windows
pub use crate::bolts::bindings::windows::win32::debug::EXCEPTION_POINTERS;
use crate::{bolts::bindings::windows::win32::debug::SetUnhandledExceptionFilter, Error};
pub use crate::bolts::bindings::windows::win32::debug::{
SetUnhandledExceptionFilter, EXCEPTION_POINTERS,
};
use crate::Error;
use std::os::raw::{c_long, c_void};
use alloc::vec::Vec;
use core::{
@ -13,7 +15,6 @@ use core::{
ptr::write_volatile,
sync::atomic::{compiler_fence, Ordering},
};
use std::os::raw::{c_long, c_void};
use num_enum::{IntoPrimitive, TryFromPrimitive};

View File

@ -1,28 +1,11 @@
//! Utility functions for AFL
use core::{cell::RefCell, debug_assert, fmt::Debug, time};
use core::{cell::RefCell, debug_assert, fmt::Debug};
use serde::{de::DeserializeOwned, Deserialize, Serialize};
use xxhash_rust::xxh3::xxh3_64_with_seed;
#[cfg(unix)]
use libc::pid_t;
#[cfg(all(unix, feature = "std"))]
use std::ffi::CString;
#[cfg(feature = "std")]
use std::{
env,
process::Command,
time::{SystemTime, UNIX_EPOCH},
};
use crate::bolts::current_nanos;
#[cfg(any(unix, feature = "std"))]
use crate::Error;
/// Can be converted to a slice
pub trait AsSlice<T> {
/// Convert to a slice
fn as_slice(&self) -> &[T];
}
const HASH_CONST: u64 = 0xa5b35705;
/// The standard rand implementation for `LibAFL`.
/// It is usually the right choice, with very good speed and a reasonable randomness.
@ -147,39 +130,6 @@ impl_randomseed!(Lehmer64Rand);
impl_randomseed!(RomuTrioRand);
impl_randomseed!(RomuDuoJrRand);
const HASH_CONST: u64 = 0xa5b35705;
/// Current time
#[cfg(feature = "std")]
#[must_use]
#[inline]
pub fn current_time() -> time::Duration {
SystemTime::now().duration_since(UNIX_EPOCH).unwrap()
}
/// Current time (fixed fallback for no_std)
#[cfg(not(feature = "std"))]
#[inline]
pub fn current_time() -> time::Duration {
// We may not have a rt clock available.
// TODO: Make it somehow plugin-able
time::Duration::from_millis(1)
}
/// Gets current nanoseconds since [`UNIX_EPOCH`]
#[must_use]
#[inline]
pub fn current_nanos() -> u64 {
current_time().as_nanos() as u64
}
/// Gets current milliseconds since [`UNIX_EPOCH`]
#[must_use]
#[inline]
pub fn current_milliseconds() -> u64 {
current_time().as_millis() as u64
}
/// XXH3 Based, hopefully speedy, rnd implementation
#[derive(Copy, Clone, Debug, Serialize, Deserialize)]
pub struct Xoshiro256StarRand {
@ -404,142 +354,11 @@ impl XkcdRand {
}
}
/// Child Process Handle
#[cfg(unix)]
pub struct ChildHandle {
pid: pid_t,
}
#[cfg(unix)]
impl ChildHandle {
/// Block until the child has exited and the status code becomes available
#[must_use]
pub fn status(&self) -> i32 {
let mut status = -1;
unsafe {
libc::waitpid(self.pid, &mut status, 0);
}
status
}
}
/// The `ForkResult` (result of a fork)
#[cfg(unix)]
pub enum ForkResult {
/// The fork finished, we are the parent process.
/// The child has the handle `ChildHandle`.
Parent(ChildHandle),
/// The fork finished, we are the child process.
Child,
}
/// Unix has forks.
/// # Safety
/// A normal fork. Runs in two processes. Should be memory safe in general.
#[cfg(unix)]
pub unsafe fn fork() -> Result<ForkResult, Error> {
match libc::fork() {
pid if pid > 0 => Ok(ForkResult::Parent(ChildHandle { pid })),
pid if pid < 0 => {
// Getting errno from rust is hard, we'll just let the libc print to stderr for now.
// In any case, this should usually not happen.
#[cfg(feature = "std")]
{
let err_str = CString::new("Fork failed").unwrap();
libc::perror(err_str.as_ptr());
}
Err(Error::Unknown(format!("Fork failed ({})", pid)))
}
_ => Ok(ForkResult::Child),
}
}
/// Executes the current process from the beginning, as a subprocess.
/// Use `startable_self()?.status()?` to wait for the child.
#[cfg(feature = "std")]
pub fn startable_self() -> Result<Command, Error> {
let mut startable = Command::new(env::current_exe()?);
startable.current_dir(env::current_dir()?).args(env::args());
Ok(startable)
}
/// Allows one to walk the mappings in /proc/self/maps, calling a callback function for each
/// mapping.
/// If the callback returns true, we stop the walk.
#[cfg(all(feature = "std", any(target_os = "linux", target_os = "android")))]
pub fn walk_self_maps(visitor: &mut dyn FnMut(usize, usize, String, String) -> bool) {
use regex::Regex;
use std::{
fs::File,
io::{BufRead, BufReader},
};
let re = Regex::new(r"^(?P<start>[0-9a-f]{8,16})-(?P<end>[0-9a-f]{8,16}) (?P<perm>[-rwxp]{4}) (?P<offset>[0-9a-f]{8}) [0-9a-f]+:[0-9a-f]+ [0-9]+\s+(?P<path>.*)$")
.unwrap();
let mapsfile = File::open("/proc/self/maps").expect("Unable to open /proc/self/maps");
for line in BufReader::new(mapsfile).lines() {
let line = line.unwrap();
if let Some(caps) = re.captures(&line) {
if visitor(
usize::from_str_radix(caps.name("start").unwrap().as_str(), 16).unwrap(),
usize::from_str_radix(caps.name("end").unwrap().as_str(), 16).unwrap(),
caps.name("perm").unwrap().as_str().to_string(),
caps.name("path").unwrap().as_str().to_string(),
) {
break;
};
}
}
}
/// Get the start and end address, permissions and path of the mapping containing a particular address
#[cfg(all(feature = "std", any(target_os = "linux", target_os = "android")))]
pub fn find_mapping_for_address(address: usize) -> Result<(usize, usize, String, String), Error> {
let mut result = (0, 0, "".to_string(), "".to_string());
walk_self_maps(&mut |start, end, permissions, path| {
if start <= address && address < end {
result = (start, end, permissions, path);
true
} else {
false
}
});
if result.0 == 0 {
Err(Error::Unknown(
"Couldn't find a mapping for this address".to_string(),
))
} else {
Ok(result)
}
}
/// Get the start and end address of the mapping with a particular path
#[cfg(all(feature = "std", any(target_os = "linux", target_os = "android")))]
#[must_use]
pub fn find_mapping_for_path(libpath: &str) -> (usize, usize) {
let mut libstart = 0;
let mut libend = 0;
walk_self_maps(&mut |start, end, _permissions, path| {
if libpath == path {
if libstart == 0 {
libstart = start;
}
libend = end;
}
false
});
(libstart, libend)
}
#[cfg(test)]
mod tests {
//use xxhash_rust::xxh3::xxh3_64_with_seed;
use crate::utils::{
use crate::bolts::rands::{
Rand, RomuDuoJrRand, RomuTrioRand, StdRand, XorShift64Rand, Xoshiro256StarRand,
};
@ -564,7 +383,7 @@ mod tests {
#[cfg(feature = "std")]
#[test]
fn test_random_seed() {
use crate::utils::RandomSeed;
use crate::bolts::rands::RandomSeed;
let mut rand_fixed = StdRand::with_seed(0);
let mut rand = StdRand::new();

View File

@ -256,6 +256,12 @@ macro_rules! create_serde_registry_for_trait {
self.map.len()
}
/// Returns `true` if this map is empty.
#[must_use]
pub fn is_empty(&self) -> bool {
self.map.is_empty()
}
/// Returns if the map contains the given type.
#[must_use]
#[inline]
@ -484,6 +490,12 @@ macro_rules! create_serde_registry_for_trait {
self.map.len()
}
/// Returns `true` if this map is empty.
#[must_use]
pub fn is_empty(&self) -> bool {
self.map.is_empty()
}
/// Returns if the element with a given type is contained in this map.
#[must_use]
#[inline]

View File

@ -17,11 +17,13 @@ pub type OsShMemProvider = Win32ShMemProvider;
#[cfg(all(windows, feature = "std"))]
pub type OsShMem = Win32ShMem;
#[cfg(target_os = "android")]
use crate::Error;
#[cfg(all(target_os = "android", feature = "std"))]
use crate::bolts::os::ashmem_server::ServedShMemProvider;
#[cfg(target_os = "android")]
#[cfg(all(target_os = "android", feature = "std"))]
pub type StdShMemProvider = RcShMemProvider<ServedShMemProvider>;
#[cfg(target_os = "android")]
#[cfg(all(target_os = "android", feature = "std"))]
pub type StdShMem = RcShMem<ServedShMemProvider>;
/// The default [`ShMemProvider`] for this os.
@ -31,16 +33,17 @@ pub type StdShMemProvider = OsShMemProvider;
#[cfg(all(feature = "std", not(target_os = "android")))]
pub type StdShMem = OsShMem;
use core::fmt::Debug;
use serde::{Deserialize, Serialize};
#[cfg(feature = "std")]
use std::env;
use alloc::{rc::Rc, string::ToString};
use core::cell::RefCell;
use core::mem::ManuallyDrop;
use core::{cell::RefCell, fmt::Debug, mem::ManuallyDrop};
use crate::Error;
#[cfg(all(unix, feature = "std"))]
use crate::bolts::os::pipes::Pipe;
#[cfg(all(unix, feature = "std"))]
use std::io::{Read, Write};
/// Description of a shared map.
/// May be used to restore the map by id.
@ -191,10 +194,20 @@ pub trait ShMemProvider: Send + Clone + Default + Debug {
))
}
/// This method should be called after a fork or after cloning/a thread creation event, allowing the [`ShMem`] to
/// reset thread specific info, and potentially reconnect.
fn post_fork(&mut self) {
/// This method should be called before a fork or a thread creation event, allowing the [`ShMemProvider`] to
/// get ready for a potential reset of thread specific info, and for potential reconnects.
/// Make sure to call [`Self::post_fork()`] after threading!
fn pre_fork(&mut self) -> Result<(), Error> {
// do nothing
Ok(())
}
/// This method should be called after a fork or after cloning/a thread creation event, allowing the [`ShMemProvider`] to
/// reset thread specific info, and potentially reconnect.
/// Make sure to call [`Self::pre_fork()`] before threading!
fn post_fork(&mut self, _is_child: bool) -> Result<(), Error> {
// do nothing
Ok(())
}
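// A minimal sketch of the intended call order around a fork, matching how the Launcher and
// RestartingMgr in this changeset use it; `provider` is a placeholder `ShMemProvider`:
//
//     provider.pre_fork()?;
//     match unsafe { fork() }? {
//         ForkResult::Parent(_child) => provider.post_fork(false)?,
//         ForkResult::Child => provider.post_fork(true)?,
//     }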
/// Release the resources associated with the given [`ShMem`]
@ -243,12 +256,24 @@ impl<T: ShMemProvider> Drop for RcShMem<T> {
/// that can use internal mutability.
/// Useful if the `ShMemProvider` needs to keep local state.
#[derive(Debug, Clone)]
#[cfg(all(unix, feature = "std"))]
pub struct RcShMemProvider<T: ShMemProvider> {
/// The wrapped [`ShMemProvider`].
internal: Rc<RefCell<T>>,
/// A pipe the child uses to communicate progress to the parent after fork.
/// This prevents a potential race condition when using the [`AshmemService`].
#[cfg(unix)]
child_parent_pipe: Option<Pipe>,
#[cfg(unix)]
/// A pipe the parent uses to communicate progress to the child after fork.
/// This prevents a potential race condition when using the [`AshmemService`].
parent_child_pipe: Option<Pipe>,
}
#[cfg(all(unix, feature = "std"))]
unsafe impl<T: ShMemProvider> Send for RcShMemProvider<T> {}
#[cfg(all(unix, feature = "std"))]
impl<T> ShMemProvider for RcShMemProvider<T>
where
T: ShMemProvider + alloc::fmt::Debug,
@ -258,6 +283,8 @@ where
fn new() -> Result<Self, Error> {
Ok(Self {
internal: Rc::new(RefCell::new(T::new()?)),
child_parent_pipe: None,
parent_child_pipe: None,
})
}
@ -286,11 +313,100 @@ where
})
}
fn post_fork(&mut self) {
self.internal.borrow_mut().post_fork()
/// This method should be called before a fork or a thread creation event, allowing the [`ShMemProvider`] to
/// get ready for a potential reset of thread specific info, and for potential reconnects.
fn pre_fork(&mut self) -> Result<(), Error> {
// Set up the pipes to communicate progress over, later.
self.child_parent_pipe = Some(Pipe::new()?);
self.parent_child_pipe = Some(Pipe::new()?);
self.internal.borrow_mut().pre_fork()
}
/// After fork, make sure everything gets set up correctly internally.
fn post_fork(&mut self, is_child: bool) -> Result<(), Error> {
if is_child {
self.await_parent_done()?;
let child_shmem = self.internal.borrow_mut().clone();
self.internal = Rc::new(RefCell::new(child_shmem));
}
self.internal.borrow_mut().post_fork(is_child)?;
if is_child {
self.set_child_done()?;
} else {
self.set_parent_done()?;
self.await_child_done()?;
}
self.parent_child_pipe = None;
self.child_parent_pipe = None;
Ok(())
}
}
#[cfg(all(unix, feature = "std"))]
impl<T> RcShMemProvider<T>
where
T: ShMemProvider,
{
/// "set" the "latch"
/// (we abuse `pipes` as `semaphores`, as they don't need an additional shared mem region.)
fn pipe_set(pipe: &mut Option<Pipe>) -> Result<(), Error> {
match pipe {
Some(pipe) => {
let ok = [0u8; 4];
pipe.write_all(&ok)?;
Ok(())
}
None => Err(Error::IllegalState(
"Unexpected `None` Pipe in RcShMemProvider! Missing post_fork()?".to_string(),
)),
}
}
/// "await" the "latch"
fn pipe_await(pipe: &mut Option<Pipe>) -> Result<(), Error> {
match pipe {
Some(pipe) => {
let ok = [0u8; 4];
let mut ret = ok;
pipe.read_exact(&mut ret)?;
if ret == ok {
Ok(())
} else {
Err(Error::Unknown(format!(
"Wrong result read from pipe! Expected 0, got {:?}",
ret
)))
}
}
None => Err(Error::IllegalState(
"Unexpected `None` Pipe in RcShMemProvider! Missing post_fork()?".to_string(),
)),
}
}
/// After fork, wait for the parent to write to our pipe :)
fn await_parent_done(&mut self) -> Result<(), Error> {
Self::pipe_await(&mut self.parent_child_pipe)
}
/// After fork, inform the new child we're done
fn set_parent_done(&mut self) -> Result<(), Error> {
Self::pipe_set(&mut self.parent_child_pipe)
}
/// After fork, wait for the child to write to our pipe :)
fn await_child_done(&mut self) -> Result<(), Error> {
Self::pipe_await(&mut self.child_parent_pipe)
}
/// After fork, inform the parent that the child is done
fn set_child_done(&mut self) -> Result<(), Error> {
Self::pipe_set(&mut self.child_parent_pipe)
}
}
#[cfg(all(unix, feature = "std"))]
impl<T> Default for RcShMemProvider<T>
where
T: ShMemProvider + alloc::fmt::Debug,
@ -326,10 +442,10 @@ pub mod unix_shmem {
use core::{ptr, slice};
use libc::{c_int, c_long, c_uchar, c_uint, c_ulong, c_ushort, c_void};
use crate::Error;
use super::super::{ShMem, ShMemId, ShMemProvider};
use crate::{
bolts::shmem::{ShMem, ShMemId, ShMemProvider},
Error,
};
#[cfg(unix)]
#[derive(Copy, Clone)]
#[repr(C)]
@ -491,9 +607,10 @@ pub mod unix_shmem {
};
use std::ffi::CString;
use crate::Error;
use super::super::{ShMem, ShMemId, ShMemProvider};
use crate::{
bolts::shmem::{ShMem, ShMemId, ShMemProvider},
Error,
};
extern "C" {
fn ioctl(fd: c_int, request: c_long, ...) -> c_int;
@ -702,15 +819,17 @@ pub mod unix_shmem {
#[cfg(all(feature = "std", windows))]
pub mod win32_shmem {
use super::{ShMem, ShMemId, ShMemProvider};
use crate::{
bolts::bindings::{
bolts::{
bindings::{
windows::win32::system_services::{
CreateFileMappingA, MapViewOfFile, OpenFileMappingA, UnmapViewOfFile,
},
windows::win32::system_services::{BOOL, HANDLE, PAGE_TYPE, PSTR},
windows::win32::windows_programming::CloseHandle,
},
shmem::{ShMem, ShMemId, ShMemProvider},
},
Error,
};
@ -752,7 +871,7 @@ pub mod win32_shmem {
)));
}
let map = MapViewOfFile(handle, FILE_MAP_ALL_ACCESS, 0, 0, map_size) as *mut u8;
if map == ptr::null_mut() {
if map.is_null() {
return Err(Error::Unknown(format!(
"Cannot map shared memory {}",
String::from_utf8_lossy(map_str_bytes)

View File

@ -2,12 +2,11 @@
// with testcases only from a subset of the total corpus.
use crate::{
bolts::serdeany::SerdeAny,
bolts::{rands::Rand, serdeany::SerdeAny, AsSlice},
corpus::{Corpus, CorpusScheduler, Testcase},
feedbacks::MapIndexesMetadata,
inputs::{HasLen, Input},
state::{HasCorpus, HasMetadata, HasRand},
utils::{AsSlice, Rand},
Error,
};

View File

@ -25,9 +25,9 @@ use alloc::borrow::ToOwned;
use core::{cell::RefCell, marker::PhantomData};
use crate::{
bolts::rands::Rand,
inputs::Input,
state::{HasCorpus, HasRand},
utils::Rand,
Error,
};

View File

@ -55,7 +55,7 @@ where
testcase.set_filename(filename_str.into());
};
if self.meta_format.is_some() {
let filename = testcase.filename().as_ref().unwrap().to_owned() + ".metadata";
let filename = testcase.filename().as_ref().unwrap().clone() + ".metadata";
let mut file = File::create(filename)?;
let serialized = match self.meta_format.as_ref().unwrap() {

View File

@ -80,10 +80,10 @@ mod tests {
use std::{fs, path::PathBuf};
use crate::{
bolts::rands::StdRand,
corpus::{Corpus, CorpusScheduler, OnDiskCorpus, QueueCorpusScheduler, Testcase},
inputs::bytes::BytesInput,
state::{HasCorpus, StdState},
utils::StdRand,
};
#[test]
@ -114,7 +114,7 @@ mod tests {
.filename()
.as_ref()
.unwrap()
.to_owned();
.clone();
assert_eq!(filename, "target/.test/fancy/path/fancyfile");

View File

@ -1,7 +1,7 @@
//! Architecture agnostic processor features
#[cfg(not(any(target_arch = "x86_64", target_arch = "x86")))]
use crate::utils::current_nanos;
use crate::bolts::current_nanos;
// TODO: Add more architectures, using C code, see
// https://github.com/google/benchmark/blob/master/src/cycleclock.h

View File

@ -2,7 +2,7 @@
use alloc::{string::ToString, vec::Vec};
use core::{marker::PhantomData, time::Duration};
use core_affinity::CoreId;
use serde::{de::DeserializeOwned, Serialize};
#[cfg(feature = "std")]
@ -14,6 +14,9 @@ use crate::bolts::{
shmem::StdShMemProvider,
};
#[cfg(feature = "std")]
use std::net::{SocketAddr, ToSocketAddrs};
use crate::{
bolts::{
llmp::{self, Flags, LlmpClientDescription, LlmpSender, Tag},
@ -36,14 +39,17 @@ use crate::bolts::{
};
#[cfg(all(feature = "std", windows))]
use crate::utils::startable_self;
use crate::bolts::os::startable_self;
#[cfg(all(feature = "std", unix))]
use crate::utils::{fork, ForkResult};
use crate::bolts::os::{fork, ForkResult};
#[cfg(all(feature = "std", target_os = "android"))]
#[cfg(all(target_os = "android", feature = "std"))]
use crate::bolts::os::ashmem_server::AshmemService;
#[cfg(feature = "std")]
use typed_builder::TypedBuilder;
/// Forward this to the client
const _LLMP_TAG_EVENT_TO_CLIENT: llmp::Tag = 0x2C11E471;
/// Only handle this in the broker
@ -167,6 +173,19 @@ where
matches!(self.llmp, llmp::LlmpConnection::IsBroker { broker: _ })
}
#[cfg(feature = "std")]
pub fn connect_b2b<A>(&mut self, addr: A) -> Result<(), Error>
where
A: ToSocketAddrs,
{
match &mut self.llmp {
llmp::LlmpConnection::IsBroker { broker } => broker.connect_b2b(addr),
llmp::LlmpConnection::IsClient { client: _ } => Err(Error::IllegalState(
"Called broker loop in the client".into(),
)),
}
}
/// Run forever in the broker
pub fn broker_loop(&mut self) -> Result<(), Error> {
match &mut self.llmp {
@ -604,6 +623,17 @@ where
}
}
/// The kind of manager we're creating right now
#[derive(Debug, Clone, Copy)]
pub enum ManagerKind {
/// Any kind will do
Any,
/// A client, getting messages from a local broker.
Client { cpu_core: Option<CoreId> },
/// A [`LlmpBroker`], forwarding the packets of local clients.
Broker,
}
/// Sets up a restarting fuzzer, using the [`StdShMemProvider`], and standard features.
/// The restarting mgr is a combination of restarter and runner, that can be used on systems with and without `fork` support.
/// The restarter will spawn a new process each time the child crashes or timeouts.
@ -621,63 +651,133 @@ pub fn setup_restarting_mgr_std<I, OT, S, ST>(
>
where
I: Input,
S: DeserializeOwned,
ST: Stats + Clone,
OT: ObserversTuple,
S: DeserializeOwned,
ST: Stats,
{
#[cfg(target_os = "android")]
AshmemService::start().expect("Error starting Ashmem Service");
setup_restarting_mgr::<I, OT, S, _, ST>(StdShMemProvider::new()?, stats, broker_port)
RestartingMgr::builder()
.shmem_provider(StdShMemProvider::new()?)
.stats(stats)
.broker_port(broker_port)
.build()
.launch()
}
/// A restarting state is a combination of restarter and runner, that can be used on systems with and without `fork` support.
/// The restarter will start a new process each time the child crashes or timeouts.
/// Provides a `builder` which can be used to build a [`RestartingMgr`], which is a combination of a
/// `restarter` and `runner`, that can be used on systems both with and without `fork` support. The
/// `restarter` will start a new process each time the child crashes or times out.
#[cfg(feature = "std")]
#[allow(
clippy::unnecessary_operation,
clippy::type_complexity,
clippy::similar_names
)] // for { mgr = LlmpEventManager... }
pub fn setup_restarting_mgr<I, OT, S, SP, ST>(
mut shmem_provider: SP,
//mgr: &mut LlmpEventManager<I, OT, S, SH, ST>,
stats: ST,
broker_port: u16,
) -> Result<(Option<S>, LlmpRestartingEventManager<I, OT, S, SP, ST>), Error>
#[allow(clippy::default_trait_access)]
#[derive(TypedBuilder, Debug)]
pub struct RestartingMgr<I, OT, S, SP, ST>
where
I: Input,
S: DeserializeOwned,
OT: ObserversTuple,
S: DeserializeOwned,
SP: ShMemProvider + 'static,
ST: Stats,
//CE: CustomEvent<I>,
{
/// The shared memory provider to use for the broker or client spawned by the restarting
/// manager.
shmem_provider: SP,
/// The stats to use
stats: ST,
/// The broker port to use
#[builder(default = 1337_u16)]
broker_port: u16,
/// The address to connect to
#[builder(default = None)]
remote_broker_addr: Option<SocketAddr>,
/// The type of manager to build
#[builder(default = ManagerKind::Any)]
kind: ManagerKind,
#[builder(setter(skip), default = PhantomData {})]
_phantom: PhantomData<(I, OT, S)>,
}
#[cfg(feature = "std")]
#[allow(clippy::type_complexity)]
#[allow(clippy::too_many_lines)]
impl<I, OT, S, SP, ST> RestartingMgr<I, OT, S, SP, ST>
where
I: Input,
OT: ObserversTuple,
S: DeserializeOwned,
SP: ShMemProvider,
ST: Stats + Clone,
{
/// Launch the restarting manager
pub fn launch(
&mut self,
) -> Result<(Option<S>, LlmpRestartingEventManager<I, OT, S, SP, ST>), Error> {
let mut mgr = LlmpEventManager::<I, OT, S, SP, ST>::new_on_port(
shmem_provider.clone(),
stats,
broker_port,
self.shmem_provider.clone(),
self.stats.clone(),
self.broker_port,
)?;
// We start ourselves as a child process to actually fuzz
let (sender, mut receiver, mut new_shmem_provider) = if std::env::var(_ENV_FUZZER_SENDER)
let (sender, mut receiver, new_shmem_provider, core_id) = if std::env::var(
_ENV_FUZZER_SENDER,
)
.is_err()
{
if mgr.is_broker() {
// We get here if we are on Unix, or we are a broker on Windows.
let core_id = if mgr.is_broker() {
match self.kind {
ManagerKind::Broker | ManagerKind::Any => {
// Yep, broker. Just loop here.
println!("Doing broker things. Run this tool again to start fuzzing in a client.");
println!(
"Doing broker things. Run this tool again to start fuzzing in a client."
);
if let Some(remote_broker_addr) = self.remote_broker_addr {
println!("B2b: Connecting to {:?}", &remote_broker_addr);
mgr.connect_b2b(remote_broker_addr)?;
};
mgr.broker_loop()?;
return Err(Error::ShuttingDown);
}
ManagerKind::Client { cpu_core: _ } => {
return Err(Error::IllegalState(
"Tried to start a client, but got a broker".to_string(),
));
}
}
} else {
match self.kind {
ManagerKind::Broker => {
return Err(Error::IllegalState(
"Tried to start a broker, but got a client".to_string(),
));
}
ManagerKind::Client { cpu_core } => cpu_core,
ManagerKind::Any => None,
}
};
if let Some(core_id) = core_id {
println!("Setting core affinity to {:?}", core_id);
core_affinity::set_for_current(core_id);
}
// We are the fuzzer respawner in a llmp client
mgr.to_env(_ENV_FUZZER_BROKER_CLIENT_INITIAL);
// First, create a channel from the fuzzer (sender) to us (receiver) to report its state for restarts.
let sender = { LlmpSender::new(shmem_provider.clone(), 0, false)? };
let sender = { LlmpSender::new(self.shmem_provider.clone(), 0, false)? };
let map = { shmem_provider.clone_ref(&sender.out_maps.last().unwrap().shmem)? };
let receiver = LlmpReceiver::on_existing_map(shmem_provider.clone(), map, None)?;
let map = {
self.shmem_provider
.clone_ref(&sender.out_maps.last().unwrap().shmem)?
};
let receiver = LlmpReceiver::on_existing_map(self.shmem_provider.clone(), map, None)?;
// Store the information to a map.
sender.to_env(_ENV_FUZZER_SENDER)?;
receiver.to_env(_ENV_FUZZER_RECEIVER)?;
@ -687,11 +787,20 @@ where
loop {
dbg!("Spawning next client (id {})", ctr);
// On Unix, we fork (todo: measure if that is actually faster.)
// On Unix, we fork
#[cfg(unix)]
let child_status = match unsafe { fork() }? {
ForkResult::Parent(handle) => handle.status(),
ForkResult::Child => break (sender, receiver, shmem_provider),
let child_status = {
self.shmem_provider.pre_fork()?;
match unsafe { fork() }? {
ForkResult::Parent(handle) => {
self.shmem_provider.post_fork(false)?;
handle.status()
}
ForkResult::Child => {
self.shmem_provider.post_fork(true)?;
break (sender, receiver, self.shmem_provider.clone(), core_id);
}
}
};
// On windows, we spawn ourself again
@ -715,19 +824,23 @@ where
ctr = ctr.wrapping_add(1);
}
} else {
// We are the newly started fuzzing instance, first, connect to our own restore map.
// We are the newly started fuzzing instance (i.e., on Windows). First, connect to our own restore map.
// We get here *only on Windows*, if we were started by a restarting fuzzer.
// A sender and a receiver for single communication
// Clone so we get a new connection to the AshmemServer if we are using
// ServedShMemProvider
shmem_provider.post_fork();
(
LlmpSender::on_existing_from_env(shmem_provider.clone(), _ENV_FUZZER_SENDER)?,
LlmpReceiver::on_existing_from_env(shmem_provider.clone(), _ENV_FUZZER_RECEIVER)?,
shmem_provider,
LlmpSender::on_existing_from_env(self.shmem_provider.clone(), _ENV_FUZZER_SENDER)?,
LlmpReceiver::on_existing_from_env(
self.shmem_provider.clone(),
_ENV_FUZZER_RECEIVER,
)?,
self.shmem_provider.clone(),
None,
)
};
new_shmem_provider.post_fork();
if let Some(core_id) = core_id {
core_affinity::set_for_current(core_id);
}
println!("We're a client, let's fuzz :)");
@ -765,4 +878,5 @@ where
*/
Ok((state, mgr))
}
}

View File

@ -256,11 +256,13 @@ mod tests {
use tuple_list::tuple_list_type;
use crate::{
bolts::tuples::{tuple_list, Named},
bolts::{
current_time,
tuples::{tuple_list, Named},
},
events::Event,
inputs::bytes::BytesInput,
observers::StdMapObserver,
utils::current_time,
};
static mut MAP: [u32; 4] = [0; 4];

View File

@ -1,16 +1,18 @@
//! The [`InProcessExecutor`] is a libfuzzer-like executor, that will simply call a function.
//! It should usually be paired with extra error-handling, such as a restarting event manager, to be effective.
use core::marker::PhantomData;
#[cfg(any(unix, all(windows, feature = "std")))]
use core::{
ffi::c_void,
marker::PhantomData,
ptr::{self, write_volatile},
sync::atomic::{compiler_fence, Ordering},
};
#[cfg(unix)]
use crate::bolts::os::unix_signals::setup_signal_handler;
#[cfg(windows)]
#[cfg(all(windows, feature = "std"))]
use crate::bolts::os::windows_exceptions::setup_exception_handler;
use crate::{
@ -64,9 +66,9 @@ where
#[inline]
fn pre_exec(
&mut self,
fuzzer: &mut Z,
state: &mut S,
event_mgr: &mut EM,
_fuzzer: &mut Z,
_state: &mut S,
_event_mgr: &mut EM,
_input: &I,
) -> Result<(), Error> {
#[cfg(unix)]
@ -82,12 +84,12 @@ where
);
// Direct raw pointer access/aliasing is pretty much undefined behavior.
// Since the state and event may have moved in memory, refresh them right before the signal may happen
write_volatile(&mut data.state_ptr, state as *mut _ as *mut c_void);
write_volatile(&mut data.event_mgr_ptr, event_mgr as *mut _ as *mut c_void);
write_volatile(&mut data.fuzzer_ptr, fuzzer as *mut _ as *mut c_void);
write_volatile(&mut data.state_ptr, _state as *mut _ as *mut c_void);
write_volatile(&mut data.event_mgr_ptr, _event_mgr as *mut _ as *mut c_void);
write_volatile(&mut data.fuzzer_ptr, _fuzzer as *mut _ as *mut c_void);
compiler_fence(Ordering::SeqCst);
}
#[cfg(windows)]
#[cfg(all(windows, feature = "std"))]
unsafe {
let data = &mut windows_exception_handler::GLOBAL_STATE;
write_volatile(
@ -100,9 +102,9 @@ where
);
// Direct raw pointer access/aliasing is pretty much undefined behavior.
// Since the state and event may have moved in memory, refresh them right before the signal may happen
write_volatile(&mut data.state_ptr, state as *mut _ as *mut c_void);
write_volatile(&mut data.event_mgr_ptr, event_mgr as *mut _ as *mut c_void);
write_volatile(&mut data.fuzzer_ptr, fuzzer as *mut _ as *mut c_void);
write_volatile(&mut data.state_ptr, _state as *mut _ as *mut c_void);
write_volatile(&mut data.event_mgr_ptr, _event_mgr as *mut _ as *mut c_void);
write_volatile(&mut data.fuzzer_ptr, _fuzzer as *mut _ as *mut c_void);
compiler_fence(Ordering::SeqCst);
}
Ok(())
@ -124,7 +126,7 @@ where
);
compiler_fence(Ordering::SeqCst);
}
#[cfg(windows)]
#[cfg(all(windows, feature = "std"))]
unsafe {
write_volatile(
&mut windows_exception_handler::GLOBAL_STATE.current_input_ptr,
@ -203,7 +205,7 @@ where
setup_signal_handler(data)?;
compiler_fence(Ordering::SeqCst);
}
#[cfg(windows)]
#[cfg(all(windows, feature = "std"))]
unsafe {
let data = &mut windows_exception_handler::GLOBAL_STATE;
write_volatile(
@ -229,7 +231,7 @@ where
/// Retrieve the harness function.
#[inline]
pub fn harness(&self) -> &H {
self.harness_fn
&self.harness_fn
}
/// Retrieve the harness function for a mutable reference.
@ -356,7 +358,7 @@ mod unix_signal_handler {
#[cfg(feature = "std")]
println!("Timeout in fuzz run.");
#[cfg(feature = "std")]
let _ = stdout().flush();
let _res = stdout().flush();
let input = (data.current_input_ptr as *const I).as_ref().unwrap();
data.current_input_ptr = ptr::null();
@ -471,7 +473,7 @@ mod unix_signal_handler {
target_arch = "aarch64"
))]
{
use crate::utils::find_mapping_for_address;
use crate::bolts::os::find_mapping_for_address;
println!("{:━^100}", " CRASH ");
println!(
"Received signal {} at 0x{:016x}, fault address: 0x{:016x}",
@ -504,7 +506,7 @@ mod unix_signal_handler {
}
#[cfg(feature = "std")]
let _ = stdout().flush();
let _res = stdout().flush();
let input = (data.current_input_ptr as *const I).as_ref().unwrap();
// Make sure we don't crash in the crash handler forever.
@ -549,7 +551,7 @@ mod unix_signal_handler {
}
}
#[cfg(windows)]
#[cfg(all(windows, feature = "std"))]
mod windows_exception_handler {
use alloc::vec::Vec;
use core::{ffi::c_void, ptr};

View File

@ -9,14 +9,13 @@ use num::Integer;
use serde::{Deserialize, Serialize};
use crate::{
bolts::tuples::Named,
bolts::{tuples::Named, AsSlice},
corpus::Testcase,
executors::ExitKind,
feedbacks::{Feedback, FeedbackState, FeedbackStatesTuple},
inputs::Input,
observers::{MapObserver, ObserversTuple},
state::{HasFeedbackStates, HasMetadata},
utils::AsSlice,
Error,
};

View File

@ -1,6 +1,7 @@
//! The `Fuzzer` is the main struct for a fuzz campaign.
use crate::{
bolts::current_time,
corpus::{Corpus, CorpusScheduler, Testcase},
events::{Event, EventManager},
executors::{
@ -13,7 +14,6 @@ use crate::{
stages::StagesTuple,
start_timer,
state::{HasClientPerfStats, HasCorpus, HasExecutions, HasSolutions},
utils::current_time,
Error,
};
@ -353,7 +353,7 @@ where
observers_buf,
corpus_size: state.corpus().count(),
client_config: "TODO".into(),
time: crate::utils::current_time(),
time: current_time(),
executions: *state.executions(),
},
)?;

View File

@ -4,8 +4,8 @@ use alloc::vec::Vec;
use core::{cmp::min, marker::PhantomData};
use crate::{
bolts::rands::Rand,
inputs::{bytes::BytesInput, Input},
utils::Rand,
Error,
};

View File

@ -105,7 +105,7 @@ impl BytesInput {
#[cfg(test)]
mod tests {
use crate::utils::{Rand, StdRand};
use crate::bolts::rands::{Rand, StdRand};
#[test]
fn test_input() {

View File

@ -36,7 +36,6 @@ pub mod observers;
pub mod stages;
pub mod state;
pub mod stats;
pub mod utils;
pub mod fuzzer;
pub use fuzzer::*;
@ -123,6 +122,13 @@ impl From<serde_json::Error> for Error {
}
}
#[cfg(unix)]
impl From<nix::Error> for Error {
fn from(err: nix::Error) -> Self {
Self::Unknown(format!("{:?}", err))
}
}
/// Create an AFL Error from io Error
#[cfg(feature = "std")]
impl From<io::Error> for Error {
@ -157,7 +163,7 @@ impl From<ParseIntError> for Error {
#[cfg(test)]
mod tests {
use crate::{
bolts::tuples::tuple_list,
bolts::{rands::StdRand, tuples::tuple_list},
corpus::{Corpus, InMemoryCorpus, RandCorpusScheduler, Testcase},
executors::{ExitKind, InProcessExecutor},
inputs::BytesInput,
@ -165,7 +171,6 @@ mod tests {
stages::StdMutationalStage,
state::{HasCorpus, StdState},
stats::SimpleStats,
utils::StdRand,
Fuzzer, StdFuzzer,
};

View File

@ -1,12 +1,11 @@
//! A wide variety of mutations used during fuzzing.
use crate::{
bolts::tuples::Named,
bolts::{rands::Rand, tuples::Named},
corpus::Corpus,
inputs::{HasBytesVec, Input},
mutators::{MutationResult, Mutator},
state::{HasCorpus, HasMaxSize, HasRand},
utils::Rand,
Error,
};
@ -1832,13 +1831,14 @@ mod tests {
use super::*;
use crate::{
bolts::tuples::tuple_list,
bolts::tuples::HasLen,
bolts::{
rands::StdRand,
tuples::{tuple_list, HasLen},
},
corpus::{Corpus, InMemoryCorpus},
inputs::BytesInput,
mutators::MutatorsTuple,
state::{HasMetadata, StdState},
utils::StdRand,
};
fn test_mutations<C, I, R, S>() -> impl MutatorsTuple<I, S>

View File

@ -1,7 +1,6 @@
//! The `ScheduledMutator` schedules multiple mutations internally.
use alloc::string::String;
use alloc::vec::Vec;
use alloc::{string::String, vec::Vec};
use core::{
fmt::{self, Debug},
marker::PhantomData,
@ -9,12 +8,15 @@ use core::{
use serde::{Deserialize, Serialize};
use crate::{
bolts::tuples::{tuple_list, NamedTuple},
bolts::{
rands::Rand,
tuples::{tuple_list, NamedTuple},
AsSlice,
},
corpus::Corpus,
inputs::{HasBytesVec, Input},
mutators::{MutationResult, Mutator, MutatorsTuple},
state::{HasCorpus, HasMaxSize, HasMetadata, HasRand},
utils::{AsSlice, Rand},
Error,
};
@ -397,6 +399,7 @@ where
#[cfg(test)]
mod tests {
use crate::{
bolts::rands::{Rand, StdRand, XkcdRand},
corpus::{Corpus, InMemoryCorpus, Testcase},
inputs::{BytesInput, HasBytesVec},
mutators::{
@ -405,7 +408,6 @@ mod tests {
Mutator,
},
state::StdState,
utils::{Rand, StdRand, XkcdRand},
};
#[test]

View File

@ -3,6 +3,9 @@
use alloc::vec::Vec;
use core::marker::PhantomData;
use serde::{Deserialize, Serialize};
#[cfg(feature = "std")]
use crate::mutators::str_decode;
#[cfg(feature = "std")]
use std::{
fs::File,
@ -11,16 +14,12 @@ use std::{
};
use crate::{
bolts::rands::Rand,
inputs::{HasBytesVec, Input},
mutators::{buffer_self_copy, mutations, MutationResult, Mutator, Named},
mutators::{buffer_self_copy, mutations::buffer_copy, MutationResult, Mutator, Named},
state::{HasMaxSize, HasMetadata, HasRand},
utils::Rand,
Error,
};
use mutations::buffer_copy;
#[cfg(feature = "std")]
use crate::mutators::str_decode;
/// A state metadata holding a list of tokens
#[derive(Serialize, Deserialize)]
@ -56,7 +55,7 @@ impl Tokens {
if self.token_vec.contains(token) {
return false;
}
self.token_vec.push(token.to_vec());
self.token_vec.push(token.clone());
true
}
@ -303,7 +302,7 @@ mod tests {
#[cfg(feature = "std")]
#[test]
fn test_read_tokens() {
let _ = fs::remove_file("test.tkns");
let _res = fs::remove_file("test.tkns");
let data = r###"
# comment
token1@123="AAA"
@ -316,6 +315,6 @@ token2="B"
#[cfg(feature = "std")]
println!("Token file entries: {:?}", tokens.tokens());
assert_eq!(tokens.tokens().len(), 2);
let _ = fs::remove_file("test.tkns");
let _res = fs::remove_file("test.tkns");
}
}

View File

@ -8,9 +8,11 @@ use core::time::Duration;
use serde::{Deserialize, Serialize};
use crate::{
bolts::tuples::{MatchName, Named},
bolts::{
current_time,
tuples::{MatchName, Named},
},
executors::HasExecHooks,
utils::current_time,
Error,
};

View File

@ -1,6 +1,7 @@
use core::marker::PhantomData;
use crate::{
bolts::rands::Rand,
corpus::Corpus,
fuzzer::Evaluator,
inputs::Input,
@ -9,7 +10,6 @@ use crate::{
stages::Stage,
start_timer,
state::{HasClientPerfStats, HasCorpus, HasRand},
utils::Rand,
Error,
};

View File

@ -1,6 +1,7 @@
use core::marker::PhantomData;
use crate::{
bolts::rands::Rand,
corpus::{Corpus, CorpusScheduler},
events::EventManager,
executors::{Executor, HasObservers},
@ -9,7 +10,6 @@ use crate::{
observers::ObserversTuple,
stages::{Stage, MutationalStage},
state::{Evaluator, HasCorpus, HasRand},
utils::Rand,
Error,
};

View File

@ -9,7 +9,10 @@ use std::{
};
use crate::{
bolts::serdeany::{SerdeAny, SerdeAnyMap},
bolts::{
rands::Rand,
serdeany::{SerdeAny, SerdeAnyMap},
},
corpus::Corpus,
events::{Event, EventManager, LogSeverity},
feedbacks::FeedbackStatesTuple,
@ -17,7 +20,6 @@ use crate::{
generators::Generator,
inputs::Input,
stats::ClientPerfStats,
utils::Rand,
Error,
};

View File

@ -10,7 +10,7 @@ use alloc::string::ToString;
#[cfg(feature = "introspection")]
use core::convert::TryInto;
use crate::utils::current_time;
use crate::bolts::current_time;
const CLIENT_STATS_TIME_WINDOW_SECS: u64 = 5; // 5 seconds

View File

@ -1,6 +1,6 @@
[package]
name = "libafl_cc"
version = "0.2.1"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>"]
description = "Commodity library to wrap compilers and link LibAFL"
documentation = "https://docs.rs/libafl_cc"

View File

@ -1,6 +1,6 @@
[package]
name = "libafl_derive"
version = "0.2.1"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>"]
description = "Derive proc-macro crate for LibAFL"
documentation = "https://docs.rs/libafl_derive"

View File

@@ -1,6 +1,6 @@
[package]
name = "libafl_frida"
version = "0.2.1"
version = "0.3.0"
authors = ["s1341 <github@shmarya.net>"]
description = "Frida backend library for LibAFL"
documentation = "https://docs.rs/libafl_frida"
@@ -14,17 +14,18 @@ edition = "2018"
cc = { version = "1.0", features = ["parallel"] }
[dependencies]
libafl = { path = "../libafl", version = "0.2.1", features = ["std", "libafl_derive"] }
libafl_targets = { path = "../libafl_targets", version = "0.2.1" }
libafl = { path = "../libafl", version = "0.3.0", features = ["std", "libafl_derive"] }
libafl_targets = { path = "../libafl_targets", version = "0.3.0" }
nix = "0.20.0"
libc = "0.2.92"
hashbrown = "0.11"
libloading = "0.7.0"
rangemap = "0.1.10"
frida-gum = { version = "0.4.0", git = "https://github.com/s1341/frida-rust", features = [ "auto-download", "backtrace", "event-sink", "invocation-listener"] }
frida-gum-sys = { version = "0.2.4", git = "https://github.com/s1341/frida-rust", features = [ "auto-download", "event-sink", "invocation-listener"] }
frida-gum = { version = "0.4.1", git = "https://github.com/frida/frida-rust", features = [ "auto-download", "backtrace", "event-sink", "invocation-listener"] }
frida-gum-sys = { version = "0.2.4", git = "https://github.com/frida/frida-rust", features = [ "auto-download", "event-sink", "invocation-listener"] }
#frida-gum = { version = "0.4.0", path = "../../frida-rust/frida-gum", features = [ "auto-download", "backtrace", "event-sink", "invocation-listener"] }
#frida-gum-sys = { version = "0.2.4", path = "../../frida-rust/frida-gum-sys", features = [ "auto-download", "event-sink", "invocation-listener"] }
core_affinity = { version = "0.5", git = "https://github.com/s1341/core_affinity_rs" }
regex = "1.4"
dynasmrt = "1.0.1"
capstone = "0.8.0"

View File

@@ -8,14 +8,17 @@ this helps finding mem errors early.
use hashbrown::HashMap;
use libafl::{
bolts::{ownedref::OwnedPtr, tuples::Named},
bolts::{
os::{find_mapping_for_address, find_mapping_for_path, walk_self_maps},
ownedref::OwnedPtr,
tuples::Named,
},
corpus::Testcase,
executors::{CustomExitKind, ExitKind, HasExecHooks},
feedbacks::Feedback,
inputs::{HasTargetBytes, Input},
observers::{Observer, ObserversTuple},
state::HasMetadata,
utils::{find_mapping_for_address, find_mapping_for_path, walk_self_maps},
Error, SerdeAny,
};
use nix::{
@@ -308,7 +311,6 @@ impl Allocator {
let mut offset_to_closest = i64::max_value();
let mut closest = None;
for metadata in metadatas {
println!("{:#x}", metadata.address);
let new_offset = if hint_base == metadata.address {
(ptr as i64 - metadata.address as i64).abs()
} else {
@@ -867,8 +869,6 @@ impl AsanRuntime {
};
assert!(unsafe { getrlimit64(3, &mut stack_rlimit as *mut rlimit64) } == 0);
println!("stack_rlimit: {:?}", stack_rlimit);
let max_start = end - stack_rlimit.rlim_cur as usize;
if start != max_start {

View File

@@ -4,15 +4,18 @@ use std::hash::Hasher;
use libafl::inputs::{HasTargetBytes, Input};
#[cfg(any(target_os = "linux", target_os = "android"))]
use libafl::utils::find_mapping_for_path;
use libafl::bolts::os::find_mapping_for_path;
use libafl_targets::drcov::{DrCovBasicBlock, DrCovWriter};
#[cfg(target_arch = "aarch64")]
use capstone::arch::{
arch::{self, BuildsCapstone},
use capstone::{
arch::{
self,
arm64::{Arm64Extender, Arm64OperandType, Arm64Shift},
ArchOperand::Arm64Operand,
BuildsCapstone,
},
Capstone, Insn,
};
@@ -359,6 +362,7 @@ impl<'a> FridaInstrumentationHelper<'a> {
shift: Arm64Shift,
extender: Arm64Extender,
) {
let redzone_size = frida_gum_sys::GUM_RED_ZONE_SIZE as i32;
let writer = output.writer();
let basereg = self.writer_register(basereg);

View File

@@ -5,8 +5,12 @@ It can report coverage and, on supported architecutres, even reports memory acce
/// The frida address sanitizer runtime
pub mod asan_rt;
/// The `LibAFL` firda helper
/// The `LibAFL` frida helper
pub mod helper;
// for parsing asan cores
use libafl::bolts::os::parse_core_bind_arg;
// for getting current core_id
use core_affinity::get_core_ids;
/// A representation of the various Frida options
#[derive(Clone, Debug)]
@@ -31,6 +35,7 @@ impl FridaOptions {
#[must_use]
pub fn parse_env_options() -> Self {
let mut options = Self::default();
let mut asan_cores = None;
if let Ok(env_options) = std::env::var("LIBAFL_FRIDA_OPTIONS") {
for option in env_options.trim().split(':') {
@@ -40,7 +45,6 @@
match name {
"asan" => {
options.enable_asan = value.parse().unwrap();
#[cfg(not(target_arch = "aarch64"))]
if options.enable_asan {
panic!("ASAN is not currently supported on targets other than aarch64");
@@ -55,6 +59,9 @@
"asan-allocation-backtraces" => {
options.enable_asan_allocation_backtraces = value.parse().unwrap();
}
"asan-cores" => {
asan_cores = parse_core_bind_arg(value);
}
"instrument-suppress-locations" => {
options.instrument_suppress_locations = Some(
value
@@ -92,6 +99,19 @@
panic!("unknown FRIDA option: '{}'", option);
}
}
} // end of for loop
if options.enable_asan {
if let Some(asan_cores) = asan_cores {
let core_ids = get_core_ids().unwrap();
assert_eq!(
core_ids.len(),
1,
"Client should only be enabled on one core"
);
let core_id = core_ids[0].id;
options.enable_asan = asan_cores.contains(&core_id);
}
}
}
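
With the new `asan-cores` option, ASAN stays enabled only for a client whose single bound core appears in the parsed core list. A hypothetical way to drive the parsing shown above: the colon-separated `name=value` layout follows this hunk, while the `0-1` range syntax passed to `parse_core_bind_arg` is an assumption, and enabling ASAN still panics on non-aarch64 targets.

    use libafl_frida::FridaOptions;

    // Hypothetical invocation of the env parsing in the hunk above; only run
    // this on aarch64, since `asan=true` panics elsewhere, and note that the
    // core-count assert expects the process to already be pinned to a single
    // core (as the Launcher does for its clients).
    fn parse_with_asan_cores() -> FridaOptions {
        std::env::set_var("LIBAFL_FRIDA_OPTIONS", "asan=true:asan-cores=0-1");
        FridaOptions::parse_env_options()
    }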

View File

@@ -1,6 +1,6 @@
[package]
name = "libafl_targets"
version = "0.2.1"
version = "0.3.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>"]
description = "Common code for target instrumentation that can be used combined with LibAFL"
documentation = "https://docs.rs/libafl_targets"