Make Launcher use ClientDescription instead of CoreId (#2676)
* launcher now uses client_id instead of core_id
* adding overcommit to an example fuzzer
* Replace addr_of with &raw across the codebase (#2669)
* Replace addr_of with &raw across the codebase
* fix fixes
* more fix
* undo clang fmt?
* oops
* fix?
* allocator fix
* more fix
* more more
* more docs
* more fix
* mas mas mas
* hm
* more
* fix Frida
* needed
* more error
* qemu
* Introduce workspace (again) (#2673)
* Trying to redo workspace deps again after #2672
* unused
* clippy
* Replace addr_of with &raw across the codebase (#2669)
* Replace addr_of with &raw across the codebase
* fix fixes
* more fix
* undo clang fmt?
* oops
* fix?
* allocator fix
* more fix
* more more
* more docs
* more fix
* mas mas mas
* hm
* more
* fix Frida
* needed
* more error
* qemu
* Introduce workspace (again) (#2673)
* Trying to redo workspace deps again after #2672
* unused
* clippy
* fixing formatting issues
* cloning values to make borrow checker happy
* simplifying cfg constraints, removing excessive clippy allows
* printing clang version that is used to find inconsistencies between CI and local formatting
* some fixes according to the CI
* Specifying types
* improved logging for formatter
* more attempts at logging for the CI formatting
* fixing setting LLVM version in formatting in CI
* fixing cippy allows
* renaming launcher's ClientID to ClientDescription
* Lower capped RAND generators (#2671)
* Lower capped rand generators
* Updated all references to RAND generators
* Formatting updates
* New RAND bytes generator constructor
* Revert "Updated all references to RAND generators"
  This reverts commit 9daad894b25ec3867daf93c4fe67c03abec1d8c6.
* Revert "Formatting updates"
  This reverts commit ff2a61a366c48b3f313878f62409e51b1e1ed663.
* cargo nightly format
* Added must_use to with_min_size
* fix error '#' is not followed by a macro parameter (#2678)
* Use version.workspace (#2682)
* LibAFL_QEMU: Don't return a generic Address from Register reads (#2681)
* LibAFL_QEMU: Make ReadReg always return GuestReg type
* Don't return a generic address
* fix fuzzers
* fix mips
* Add DrCovReader to read DrCov files and DrCov dumper and merge utils (#2680)
* Add DrCov Reader
* Removed libafl_jumper deps
* Fix DrCovWriter, add dump_drcov_addrs
* Taplo
* Move frida from usize to u64
* DrCov usize=>u64
* Better error print
* More u64
* ?
* debug
* clippy
* clippy
* Add Merge option to DrCovReader
* Add drcov_merge tool
* Move folder around
* DrCov
* More assert
* fmt
* Move around
* Fix print
* Add option to read multiple files/full folders
* Fix build_all_fuzzers.sh for local runs (#2686)
* Add Intel PT tracing support (#2471)
* WIP: IntelPT qemu systemmode
* use perf-event-open-sys instead of bindgen
* intelPT Add enable and disable tracing, add test
* Use static_assertions crate
* Fix volatiles, finish test
* Add Intel PT availability check
* Use LibAFL errors in Result
* Improve filtering
* Add KVM pt_mode check
* move static_assertions use
* Check for perf_event_open support
* Add (empty) IntelPT module
* Add IntelPTModule POC
* partial ideas to implement intel pt
* forgot smth
* trace decoding draft
* add libipt decoder
* use cpuid instead of reading /proc/cpuinfo
* investigating nondeterministic behaviour
* intel_pt module add thread creation hook
* Fully identify deps versions
  Cargo docs: Although it looks like a specific version of the crate, it actually specifies a range of versions and allows SemVer compatible updates
* Move mem image to module, output to file for debug
* fixup! Use static_assertions crate
* Exclude host kernel from traces
* Bump libipt-rs
* Callback to get memory as an alterantive to image
* WIP Add bootloader fuzzer example
* Split availability check: add availability_with_qemu
* Move IntelPT to observer
* Improve test docs
* Clippy happy now
* Taplo happy now
* Add IntelPTObserver boilerplate
* Hook instead of Observer
* Clippy & Taplo
* Add psb_freq setting
* Extremely bad and dirty babyfuzzer stealing
* Use thread local cell instead of mutex
* Try a trace diff based naive feedback
* fix perf aux buffer wrap handling
* Use f64 for feedback score
* Fix clippy for cargo test
* Add config format tests
* WIP intelpt babyfuzzer with fork
* Fix not wrapped tail offset in split buffer
* Baby PT with raw traces diff working
* Cache nr_filters
* Use Lazy_lock for perf_type
* Add baby_fuzzer_intel_pt
* restore baby fuzzer
* baby_fuzzer with block decoder
* instruction decoder instead of block
* Fix after upstream merge
* OwnedRefMut instead of Cow
* Read mem directly instead of going through files
* Fix cache lifetime and tail update
* clippy
* Taplo
* Compile caps only on linux
* clippy
* Fail compilation on unsupported OSes
* Add baby_fuzzer_intel_pt to CI
* Cleanup
* Move intel pt + linux check
* fix baby pt
* rollback forkexecutor
* Remove unused dep
* Cleanup
* Lints
* Compute an edge id instead of using only block ip
* Binary only intelPT POC
* put linux specific code behind target_os=linux
* Clippy & Taplo
* fix CI
* Disable relocation
* No unwrap in decode
* No expect in decode
* Better logging, smaller aux buffer
* add IntelPTBuilder
* some lints
* Add exclude_hv config
* Per CPU tracing and inheritance
* Parametrize buffer size
* Try not to break commandExecutor API pt.1
* Try not to break commandExecutor API pt.2
* Try not to break commandExecutor API pt.3
* fix baby PT
* Support on_crash & on_timeout callbacks for libafl_qemu modules (#2620)
* support (unsafe) on_crash / on_timeout callbacks for modules
* use libc types in bindgen
* Move common code to bolts
* Cleanup
* Revert changes to backtrace_baby_fuzzers/command_executor
* Move intel_pt in one file
* Use workspace deps
* add nr_addr_filter fallback
* Cleaning
* Improve decode
* Clippy
* Improve errors and docs
* Impl from<PtError> for libafl::Error
* Merge hooks
* Docs
* Clean command executor
* fix baby PT
* fix baby PT warnings
* decoder fills the map with no vec alloc
* WIP command executor intel PT
* filter_map() instead of filter().map()
* fix docs
* fix windows?
* Baby lints
* Small cleanings
* Use personality to disable ASLR at runtime
* Fix nix dep
* Use prc-maps in babyfuzzer
* working ET_DYN elf
* Cleanup Cargo.toml
* Clean command executor
* introduce PtraceCommandConfigurator
* Fix clippy & taplo
* input via stdin
* libipt as workspace dep
* Check kernel version
* support Arg input location
* Reorder stuff
* File input
* timeout support for PtraceExec
* Lints
* Move out method not needing self form IntelPT
* unimplemented
* Lints
* Move intel_pt_baby_fuzzer
* Move intel_pt_command_executor
* Document the need for smp_rmb
* Better comment
* Readme and Makefile.toml instead of build.rs
* Move out from libafl_bolts to libafl_intelpt
* Fix hooks
* (Almost) fix intel_pt command exec
* fix intel_pt command exec debug
* Fix baby_fuzzer
* &raw over addr_of!
* cfg(target_os = "linux")
* bolts Cargo.toml leftover
* minimum wage README.md
* extract join_split_trace from decode
* extract decode_block from decode
* add 1 to `previous_block_ip` to avoid that all the recursive basic blocks map to 0
* More generic hook
* fix windows
* Update CI, fmt
* No bitbybit
* Fix docker?
* Fix Apple silicon?
* Use old libipt from crates.io
---------
Co-authored-by: Romain Malmain <romain.malmain@pm.me>
Co-authored-by: Dominik Maier <domenukk@gmail.com>
* libafl-fuzz: introduce nyx_mode (#2503)
* add nyx_mode
* fix frida ci?
* damn clippy
* clippy
* LibAFL: Remove `tui_monitor` from default features (#2685)
* No Usermode default
* no tui
* gg
* try fix CI
* fmt
---------
Co-authored-by: Dominik Maier <dmnk@google.com>
* Actually make ConstMapObserver work, introduce `nonnull_raw_mut` macro (#2687)
* Actually make ConstMapObserver work
* fixes
* does that work?
* mas
* Feature: libafl-fuzzfuzzbench (#2689)
* fuzzbench
* clippy
* fmt
* fix unicorn CI?
* Move bitfields to bitbybit (#2688)
* move to bitbybit
* Restore bitbybit dependent code
* Clippy
* Fix NautilusContext::from_file for python files (#2690)
* Bump to 0.14.0 (#2692)
* Fix versions in libafl and libafl_intelpt for crates.io (#2693)
* Fix versions in libafl and libafl_intelpt for crates
* Add libafl_intelpt to publish
* StdMOptMutator::new: remove unused type parameter (#2695)
  `I` is unused in `::new` and thus requires callers to explicitly specify any type as it can't be determined by type inference. Clippy's `extra_unused_type_parameters` should pick this up, but is tuned a bit too conservative in order to avoid false positives AFAICT.
* Move test_harness from source directory to OUT_DIR (#2694)
* remove test_harness from source directory
* fmt
* Add package.metadata.docs.rs for libafl_intelpt (#2696)
* libafl-fuzz: fix cmplog running on inputs more than once (#2697)
* libafl-fuzz: fix cmplog running on inputs more than once
* fmt
* fix afl++ cmplog header
* update to latest afl stable commit
* Libafl workspace internal deps in workspace Cargo.toml (#2691)
* Add internal deps to workspace
* libafl: use workspace internal deps
* libafl_bolts: use workspace internal deps
* 0.14.0
* use workspace internal deps
* Fix tui monitor for example fuzzers (#2699)
* Fix tui monitor for example fuzzers
* New clippy lint
* fix
* Update pyo3-build-config requirement from 0.22.3 to 0.23.1 (#2701)
  Updates the requirements on [pyo3-build-config](https://github.com/pyo3/pyo3) to permit the latest version.
  - [Release notes](https://github.com/pyo3/pyo3/releases)
  - [Changelog](https://github.com/PyO3/pyo3/blob/main/CHANGELOG.md)
  - [Commits](https://github.com/pyo3/pyo3/compare/v0.22.3...v0.23.1)
  ---
  updated-dependencies:
  - dependency-name: pyo3-build-config
    dependency-type: direct:production
  ...
  Signed-off-by: dependabot[bot] <support@github.com>
  Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
* bolts: fix build for tiers 3 platforms. (#2700)
  cater to platforms knowingly support this feature instead.
* Pre init module hooks (#2704)
* differenciate pre qemu init and post qemu init hooks
* api breakage: Emulator::new_with_qemu is not public anymore.
* Fix edge module generators (#2702)
* fix generators
* fix metadata removal for ExecutionCountRestartHelper (#2705)
* Ignore pyo3 update (#2709)
* libafl-fuzz: feature-flag nyx mode (#2712)
* Bump ctor dependency to make nightly compile again (#2713)
* Batched timeout doc (#2716)
* timeout doc
* clp
* FMT
* More batched timeout doc (#2717)
* timeout doc
* clp
* FMT
* more
* fixing an overexited cast
* renaming variables
* removing unnecessary brackets
* fixing imports
* fixing imports
* renaming more variables
* even more variable renaming
* removing duplicate clap short options
* reverting mistaken variable renaming
* comparing the actual cores instead of an enumeration index
---------
Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Dominik Maier <domenukk@gmail.com>
Co-authored-by: Subhojeet Mukherjee, PhD <57270300+CowBoy4mH3LL@users.noreply.github.com>
Co-authored-by: jejuisland87654 <jejuisland87654@gmail.com>
Co-authored-by: Marco C. <46560192+Marcondiro@users.noreply.github.com>
Co-authored-by: Dongjia "toka" Zhang <tokazerkje@outlook.com>
Co-authored-by: Romain Malmain <romain.malmain@pm.me>
Co-authored-by: Aarnav <aarnav@srlabs.de>
Co-authored-by: Dominik Maier <dmnk@google.com>
Co-authored-by: Andrea Fioraldi <andreafioraldi@gmail.com>
Co-authored-by: Mrmaxmeier <3913977+Mrmaxmeier@users.noreply.github.com>
Co-authored-by: Sharad Khanna <sharad@mineo333.dev>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: David CARLIER <devnexen@gmail.com>
Co-authored-by: Henry Chu <henrytech@outlook.com>
commit bdde109867 (parent 0d0bbf0c5d)
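In short, every client callback handed to the Launcher now receives a ClientDescription (client id, overcommit slot, and CoreId) instead of a bare CoreId. A minimal, hedged sketch of what that gives a callback, using only the accessors introduced in this commit (the surrounding fuzzer/launcher setup is elided; assumes a libafl 0.14 dependency, and the function name is illustrative):

    use libafl::events::ClientDescription;

    // Before this commit: |state, mgr, core_id: CoreId| -> Result<(), Error>
    // After this commit:  |state, mgr, client_description: ClientDescription| -> Result<(), Error>
    fn describe_client(client_description: &ClientDescription) -> String {
        format!(
            "client {} (overcommit slot {}) bound to core {}",
            client_description.id(),
            client_description.overcommit_id(),
            client_description.core_id().0
        )
    }

The diff below shows the corresponding mechanical changes in the example fuzzers and in the Launcher itself.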
@ -5,7 +5,9 @@ use std::{path::PathBuf, ptr::null};
use frida_gum::Gum;
use libafl::{
corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus},
events::{launcher::Launcher, llmp::LlmpRestartingEventManager, EventConfig},
events::{
launcher::Launcher, llmp::LlmpRestartingEventManager, ClientDescription, EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor},
feedback_or, feedback_or_fast,
feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -93,13 +95,17 @@ unsafe fn fuzz(
let shmem_provider = StdShMemProvider::new()?;
let mut run_client = |state: Option<_>, mgr: LlmpRestartingEventManager<_, _, _>, core_id| {
let mut run_client = |state: Option<_>,
mgr: LlmpRestartingEventManager<_, _, _>,
client_description: ClientDescription| {
// The restarting state will spawn the same process again as child, then restarted it each time it crashes.
// println!("{:?}", mgr.mgr_id());
if options.asan && options.asan_cores.contains(core_id) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
if options.asan && options.asan_cores.contains(client_description.core_id()) {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -222,9 +228,11 @@ unsafe fn fuzz(
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
})(state, mgr, core_id)
} else if options.cmplog && options.cmplog_cores.contains(core_id) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
})(state, mgr, client_description)
} else if options.cmplog && options.cmplog_cores.contains(client_description.core_id()) {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -356,9 +364,11 @@ unsafe fn fuzz(
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
})(state, mgr, core_id)
})(state, mgr, client_description)
} else {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -473,7 +483,7 @@ unsafe fn fuzz(
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
})(state, mgr, core_id)
})(state, mgr, client_description)
}
};
@ -5,7 +5,9 @@ use std::path::PathBuf;
use frida_gum::Gum;
use libafl::{
corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus},
events::{launcher::Launcher, llmp::LlmpRestartingEventManager, EventConfig},
events::{
launcher::Launcher, llmp::LlmpRestartingEventManager, ClientDescription, EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor},
feedback_or, feedback_or_fast,
feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -73,7 +75,9 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
let shmem_provider = StdShMemProvider::new()?;
let mut run_client = |state: Option<_>, mgr: LlmpRestartingEventManager<_, _, _>, core_id| {
let mut run_client = |state: Option<_>,
mgr: LlmpRestartingEventManager<_, _, _>,
client_description: ClientDescription| {
// The restarting state will spawn the same process again as child, then restarted it each time it crashes.
// println!("{:?}", mgr.mgr_id());
@ -90,8 +94,10 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
ExitKind::Ok
};
if options.asan && options.asan_cores.contains(core_id) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
if options.asan && options.asan_cores.contains(client_description.core_id()) {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -214,9 +220,11 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
})(state, mgr, core_id)
} else if options.cmplog && options.cmplog_cores.contains(core_id) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
})(state, mgr, client_description)
} else if options.cmplog && options.cmplog_cores.contains(client_description.core_id()) {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -349,9 +357,11 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
})(state, mgr, core_id)
})(state, mgr, client_description)
} else {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -466,7 +476,7 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
})(state, mgr, core_id)
})(state, mgr, client_description)
}
};
@ -22,7 +22,9 @@ use std::path::PathBuf;
use frida_gum::Gum;
use libafl::{
corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus},
events::{launcher::Launcher, llmp::LlmpRestartingEventManager, EventConfig},
events::{
launcher::Launcher, llmp::LlmpRestartingEventManager, ClientDescription, EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor},
feedback_and_fast, feedback_or, feedback_or_fast,
feedbacks::{ConstFeedback, CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -82,7 +84,9 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
let shmem_provider = StdShMemProvider::new()?;
let mut run_client = |state: Option<_>, mgr: LlmpRestartingEventManager<_, _, _>, core_id| {
let mut run_client = |state: Option<_>,
mgr: LlmpRestartingEventManager<_, _, _>,
client_description: ClientDescription| {
// The restarting state will spawn the same process again as child, then restarted it each time it crashes.
// println!("{:?}", mgr.mgr_id());
@ -99,8 +103,10 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
ExitKind::Ok
};
if options.asan && options.asan_cores.contains(core_id) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
if options.asan && options.asan_cores.contains(client_description.core_id()) {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -212,9 +218,11 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
})(state, mgr, core_id)
} else if options.cmplog && options.cmplog_cores.contains(core_id) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
})(state, mgr, client_description)
} else if options.cmplog && options.cmplog_cores.contains(client_description.core_id()) {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -340,9 +348,11 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(())
})(state, mgr, core_id)
})(state, mgr, client_description)
} else {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| {
(|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain();
let coverage = CoverageRuntime::new();
@ -454,7 +464,7 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
.unwrap();
Ok(())
})(state, mgr, core_id)
})(state, mgr, client_description)
}
};
@ -8,7 +8,10 @@ use std::{env, fmt::Write, fs::DirEntry, io, path::PathBuf, process};
use clap::{builder::Str, Parser};
use libafl::{
corpus::{Corpus, NopCorpus},
events::{launcher::Launcher, EventConfig, EventRestarter, LlmpRestartingEventManager},
events::{
launcher::Launcher, ClientDescription, EventConfig, EventRestarter,
LlmpRestartingEventManager,
},
executors::ExitKind,
fuzzer::StdFuzzer,
inputs::{BytesInput, HasTargetBytes},
@ -191,8 +194,10 @@ pub fn fuzz() {
ExitKind::Ok
};
let mut run_client =
|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, core_id| {
let mut run_client = |state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
client_description: ClientDescription| {
let core_id = client_description.core_id();
let core_idx = options
.cores
.position(core_id)
@ -2,12 +2,13 @@ use std::env;
use libafl::{
corpus::{InMemoryOnDiskCorpus, OnDiskCorpus},
events::ClientDescription,
inputs::BytesInput,
monitors::Monitor,
state::StdState,
Error,
};
use libafl_bolts::{core_affinity::CoreId, rands::StdRand, tuples::tuple_list};
use libafl_bolts::{rands::StdRand, tuples::tuple_list};
#[cfg(feature = "injections")]
use libafl_qemu::modules::injections::InjectionModule;
use libafl_qemu::{
@ -61,8 +62,9 @@ impl Client<'_> {
&self,
state: Option<ClientState>,
mgr: ClientMgr<M>,
core_id: CoreId,
client_description: ClientDescription,
) -> Result<(), Error> {
let core_id = client_description.core_id();
let mut args = self.args()?;
Harness::edit_args(&mut args);
log::debug!("ARGS: {:#?}", args);
@ -123,7 +125,7 @@ impl Client<'_> {
.qemu(qemu)
.harness(harness)
.mgr(mgr)
.core_id(core_id)
.client_description(client_description)
.extra_tokens(extra_tokens);
if self.options.rerun_input.is_some() && self.options.drcov.is_some() {
@ -10,7 +10,7 @@ use libafl::events::SimpleEventManager;
#[cfg(not(feature = "simplemgr"))]
use libafl::events::{EventConfig, Launcher, MonitorTypedEventManager};
use libafl::{
events::{LlmpEventManager, LlmpRestartingEventManager},
events::{ClientDescription, LlmpEventManager, LlmpRestartingEventManager},
monitors::{tui::TuiMonitor, Monitor, MultiMonitor},
Error,
};
@ -124,7 +124,7 @@ impl Fuzzer {
.unwrap(),
StateRestorer::new(shmem_provider.new_shmem(0x1000).unwrap()),
)),
CoreId(0),
ClientDescription::new(0, 0, CoreId(0)),
);
}
@ -7,7 +7,7 @@ use libafl::events::SimpleEventManager;
use libafl::events::{LlmpRestartingEventManager, MonitorTypedEventManager};
use libafl::{
corpus::{Corpus, InMemoryOnDiskCorpus, OnDiskCorpus},
events::{EventRestarter, NopEventManager},
events::{ClientDescription, EventRestarter, NopEventManager},
executors::{Executor, ShadowExecutor},
feedback_or, feedback_or_fast,
feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -32,7 +32,6 @@ use libafl::{
#[cfg(not(feature = "simplemgr"))]
use libafl_bolts::shmem::StdShMemProvider;
use libafl_bolts::{
core_affinity::CoreId,
ownedref::OwnedMutSlice,
rands::StdRand,
tuples::{tuple_list, Merge, Prepend},
@ -66,7 +65,7 @@ pub struct Instance<'a, M: Monitor> {
harness: Option<Harness>,
qemu: Qemu,
mgr: ClientMgr<M>,
core_id: CoreId,
client_description: ClientDescription,
#[builder(default)]
extra_tokens: Vec<String>,
#[builder(default=PhantomData)]
@ -161,10 +160,12 @@ impl<M: Monitor> Instance<'_, M> {
// RNG
StdRand::new(),
// Corpus that will be evolved, we keep it in memory for performance
InMemoryOnDiskCorpus::no_meta(self.options.queue_dir(self.core_id))?,
InMemoryOnDiskCorpus::no_meta(
self.options.queue_dir(self.client_description.clone()),
)?,
// Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new(self.options.crashes_dir(self.core_id))?,
OnDiskCorpus::new(self.options.crashes_dir(self.client_description.clone()))?,
// States of the feedbacks.
// The feedbacks can report the data that should persist in the State.
&mut feedback,
@ -238,7 +239,10 @@ impl<M: Monitor> Instance<'_, M> {
process::exit(0);
}
if self.options.is_cmplog_core(self.core_id) {
if self
.options
.is_cmplog_core(self.client_description.core_id())
{
// Create a QEMU in-process executor
let executor = QemuExecutor::new(
emulator,
@ -2,7 +2,7 @@ use core::time::Duration;
use std::{env, ops::Range, path::PathBuf};
use clap::{error::ErrorKind, CommandFactory, Parser};
use libafl::Error;
use libafl::{events::ClientDescription, Error};
use libafl_bolts::core_affinity::{CoreId, Cores};
use libafl_qemu::GuestAddr;
@ -144,20 +144,20 @@ impl FuzzerOptions {
PathBuf::from(&self.input)
}
pub fn output_dir(&self, core_id: CoreId) -> PathBuf {
pub fn output_dir(&self, client_description: ClientDescription) -> PathBuf {
let mut dir = PathBuf::from(&self.output);
dir.push(format!("cpu_{:03}", core_id.0));
dir.push(format!("client_{:03}", client_description.id()));
dir
}
pub fn queue_dir(&self, core_id: CoreId) -> PathBuf {
let mut dir = self.output_dir(core_id).clone();
pub fn queue_dir(&self, client_description: ClientDescription) -> PathBuf {
let mut dir = self.output_dir(client_description).clone();
dir.push("queue");
dir
}
pub fn crashes_dir(&self, core_id: CoreId) -> PathBuf {
let mut dir = self.output_dir(core_id).clone();
pub fn crashes_dir(&self, client_description: ClientDescription) -> PathBuf {
let mut dir = self.output_dir(client_description).clone();
dir.push("crashes");
dir
}
@ -71,8 +71,6 @@ mod feedback;
mod scheduler;
mod stages;
use clap::Parser;
#[cfg(not(feature = "fuzzbench"))]
use corpus::remove_main_node_file;
use corpus::{check_autoresume, create_dir_if_not_exists};
mod corpus;
mod executor;
@ -80,19 +78,23 @@ mod fuzzer;
mod hooks;
use env_parser::parse_envs;
use fuzzer::run_client;
#[cfg(feature = "fuzzbench")]
use libafl::events::SimpleEventManager;
#[cfg(not(feature = "fuzzbench"))]
use libafl::events::{CentralizedLauncher, EventConfig};
#[cfg(not(feature = "fuzzbench"))]
use libafl::monitors::MultiMonitor;
#[cfg(feature = "fuzzbench")]
use libafl::monitors::SimpleMonitor;
use libafl::{schedulers::powersched::BaseSchedule, Error};
use libafl_bolts::core_affinity::{CoreId, Cores};
#[cfg(not(feature = "fuzzbench"))]
use libafl_bolts::shmem::{ShMemProvider, StdShMemProvider};
use libafl_bolts::core_affinity::Cores;
use nix::sys::signal::Signal;
#[cfg(not(feature = "fuzzbench"))]
use {
corpus::remove_main_node_file,
libafl::{
events::{CentralizedLauncher, ClientDescription, EventConfig},
monitors::MultiMonitor,
},
libafl_bolts::shmem::{ShMemProvider, StdShMemProvider},
};
#[cfg(feature = "fuzzbench")]
use {
libafl::{events::SimpleEventManager, monitors::SimpleMonitor},
libafl_bolts::core_affinity::CoreId,
};
const AFL_DEFAULT_INPUT_LEN_MAX: usize = 1_048_576;
const AFL_DEFAULT_INPUT_LEN_MIN: usize = 1;
@ -139,22 +141,48 @@ fn main() {
.shmem_provider(shmem_provider)
.configuration(EventConfig::from_name("default"))
.monitor(monitor)
.main_run_client(|state: Option<_>, mgr: _, core_id: CoreId| {
println!("run primary client on core {}", core_id.0);
.main_run_client(
|state: Option<_>, mgr: _, client_description: ClientDescription| {
println!(
"run primary client with id {} on core {}",
client_description.id(),
client_description.core_id().0
);
let fuzzer_dir = opt.output_dir.join("fuzzer_main");
let _ = check_autoresume(&fuzzer_dir, opt.auto_resume).unwrap();
let res = run_client(state, mgr, &fuzzer_dir, core_id, &opt, true);
let res = run_client(
state,
mgr,
&fuzzer_dir,
client_description.core_id(),
&opt,
true,
);
let _ = remove_main_node_file(&fuzzer_dir);
res
})
.secondary_run_client(|state: Option<_>, mgr: _, core_id: CoreId| {
println!("run secondary client on core {}", core_id.0);
},
)
.secondary_run_client(
|state: Option<_>, mgr: _, client_description: ClientDescription| {
println!(
"run secondary client with id {} on core {}",
client_description.id(),
client_description.core_id().0
);
let fuzzer_dir = opt
.output_dir
.join(format!("fuzzer_secondary_{}", core_id.0));
.join(format!("fuzzer_secondary_{}", client_description.id()));
let _ = check_autoresume(&fuzzer_dir, opt.auto_resume).unwrap();
run_client(state, mgr, &fuzzer_dir, core_id, &opt, false)
})
run_client(
state,
mgr,
&fuzzer_dir,
client_description.core_id(),
&opt,
false,
)
},
)
.cores(&opt.cores.clone().expect("invariant; should never occur"))
.broker_port(opt.broker_port.unwrap_or(AFL_DEFAULT_BROKER_PORT))
.build()
@ -2,7 +2,7 @@ use std::path::{Path, PathBuf};
use libafl::{
corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus, Testcase},
events::{launcher::Launcher, EventConfig},
events::{launcher::Launcher, ClientDescription, EventConfig},
feedbacks::{CrashFeedback, MaxMapFeedback},
inputs::BytesInput,
monitors::MultiMonitor,
@ -14,7 +14,7 @@ use libafl::{
Error, Fuzzer, StdFuzzer,
};
use libafl_bolts::{
core_affinity::{CoreId, Cores},
core_affinity::Cores,
rands::StdRand,
shmem::{ShMemProvider, StdShMemProvider},
tuples::tuple_list,
@ -31,10 +31,12 @@ fn main() {
let parent_cpu_id = cores.ids.first().expect("unable to get first core id");
// region: fuzzer start function
let mut run_client = |state: Option<_>, mut restarting_mgr, core_id: CoreId| {
let mut run_client = |state: Option<_>,
mut restarting_mgr,
client_description: ClientDescription| {
// nyx stuff
let settings = NyxSettings::builder()
.cpu_id(core_id.0)
.cpu_id(client_description.core_id().0)
.parent_cpu_id(Some(parent_cpu_id.0))
.build();
let helper = NyxHelper::new("/tmp/nyx_libxml2/", settings).unwrap();
@ -83,7 +83,7 @@ pub fn fuzz() {
.expect("Symbol or env BREAKPOINT not found");
println!("Breakpoint address = {breakpoint:#x}");
let mut run_client = |state: Option<_>, mut mgr, _core_id| {
let mut run_client = |state: Option<_>, mut mgr, _client_description| {
let args: Vec<String> = env::args().collect();
// The wrapped harness function, calling out to the LLVM-style harness
@ -80,7 +80,7 @@ pub fn fuzz() {
.expect("Symbol or env BREAKPOINT not found");
println!("Breakpoint address = {breakpoint:#x}");
let mut run_client = |state: Option<_>, mut mgr, _core_id| {
let mut run_client = |state: Option<_>, mut mgr, _client_description| {
let target_dir = env::var("TARGET_DIR").expect("TARGET_DIR env not set");
// Create an observation channel using the coverage map
@ -43,7 +43,7 @@ pub fn fuzz() {
let corpus_dirs = [PathBuf::from("./corpus")];
let objective_dir = PathBuf::from("./crashes");
let mut run_client = |state: Option<_>, mut mgr, _core_id| {
let mut run_client = |state: Option<_>, mut mgr, _client_description| {
// Initialize QEMU
let args: Vec<String> = env::args().collect();
@ -57,7 +57,7 @@ pub fn fuzz() {
let corpus_dirs = [PathBuf::from("./corpus")];
let objective_dir = PathBuf::from("./crashes");
let mut run_client = |state: Option<_>, mut mgr, _core_id| {
let mut run_client = |state: Option<_>, mut mgr, _client_description| {
// Initialize QEMU
let args: Vec<String> = env::args().collect();
@ -47,7 +47,7 @@ pub fn fuzz() {
let corpus_dirs = [PathBuf::from("./corpus")];
let objective_dir = PathBuf::from("./crashes");
let mut run_client = |state: Option<_>, mut mgr, _core_id| {
let mut run_client = |state: Option<_>, mut mgr, _client_description| {
// Initialize QEMU
let args: Vec<String> = env::args().collect();
@ -130,7 +130,7 @@ pub extern "C" fn LLVMFuzzerRunDriver(
// TODO: we need to handle Atheris calls to `exit` on errors somhow.
let mut run_client = |state: Option<_>, mut mgr, _core_id| {
let mut run_client = |state: Option<_>, mut mgr, _client_description| {
// Create an observation channel using the coverage map
let edges = unsafe { extra_counters() };
println!("edges: {:?}", edges);
@ -137,7 +137,7 @@ pub extern "C" fn libafl_main() {
let monitor = MultiMonitor::new(|s| println!("{s}"));
let mut run_client = |state: Option<_>, mut restarting_mgr, _core_id| {
let mut run_client = |state: Option<_>, mut restarting_mgr, _client_description| {
// Create an observation channel using the coverage map
let edges_observer = HitcountsMapObserver::new(unsafe {
StdMapObserver::from_mut_ptr("edges", EDGES_MAP.as_mut_ptr(), MAX_EDGES_FOUND)
@ -8,7 +8,10 @@ use std::{env, net::SocketAddr, path::PathBuf};
use clap::{self, Parser};
use libafl::{
corpus::{Corpus, InMemoryCorpus, OnDiskCorpus},
events::{centralized::CentralizedEventManager, launcher::CentralizedLauncher, EventConfig},
events::{
centralized::CentralizedEventManager, launcher::CentralizedLauncher, ClientDescription,
EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind},
feedback_or, feedback_or_fast,
feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -27,7 +30,7 @@ use libafl::{
Error, HasMetadata,
};
use libafl_bolts::{
core_affinity::{CoreId, Cores},
core_affinity::Cores,
rands::StdRand,
shmem::{ShMemProvider, StdShMemProvider},
tuples::{tuple_list, Merge},
@ -136,12 +139,14 @@ pub extern "C" fn libafl_main() {
let monitor = MultiMonitor::new(|s| println!("{s}"));
let mut secondary_run_client = |state: Option<_>,
let mut secondary_run_client =
|state: Option<_>,
mut mgr: CentralizedEventManager<_, _, _, _>,
_core_id: CoreId| {
_client_description: ClientDescription| {
// Create an observation channel using the coverage map
let edges_observer =
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices();
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") })
.track_indices();
// Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time");
@ -243,7 +248,9 @@ pub extern "C" fn libafl_main() {
if state.must_load_initial_inputs() {
state
.load_initial_inputs(&mut fuzzer, &mut executor, &mut mgr, &opt.input)
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &opt.input));
.unwrap_or_else(|_| {
panic!("Failed to load initial corpus at {:?}", &opt.input)
});
println!("We imported {} inputs from disk.", state.corpus().count());
}
if !mgr.is_main() {
@ -112,7 +112,7 @@ windows_alias = "unsupported"
script_runner = "@shell"
script = '''
rm -rf libafl_unix_shmem_server || true
timeout 31s ./${FUZZER_NAME}.coverage --broker-port 21337 --cores 0 --input ./corpus 2>/dev/null | tee fuzz_stdout.log || true
timeout 31s ./${FUZZER_NAME}.coverage --broker-port 21337 --cores 0 --input ./corpus | tee fuzz_stdout.log || true
if grep -qa "corpus: 30" fuzz_stdout.log; then
echo "Fuzzer is working"
else
@ -56,11 +56,19 @@ struct Opt {
short,
long,
value_parser = Cores::from_cmdline,
help = "Spawn a client in each of the provided cores. Broker runs in the 0th core. 'all' to select all available cores. 'none' to run a client without binding to any core. eg: '1,2-4,6' selects the cores 1,2,3,4,6.",
help = "Spawn clients in each of the provided cores. Broker runs in the 0th core. 'all' to select all available cores. 'none' to run a client without binding to any core. eg: '1,2-4,6' selects the cores 1,2,3,4,6.",
name = "CORES"
)]
cores: Cores,
#[arg(
long,
help = "Spawn n clients on each core, this is useful if clients don't fully load a client, e.g. because they `sleep` often.",
name = "OVERCOMMIT",
default_value = "1"
)]
overcommit: usize,
#[arg(
short = 'p',
long,
@ -137,7 +145,7 @@ pub extern "C" fn libafl_main() {
MultiMonitor::new(|s| println!("{s}")),
);
let mut run_client = |state: Option<_>, mut restarting_mgr, _core_id| {
let mut run_client = |state: Option<_>, mut restarting_mgr, _client_description| {
// Create an observation channel using the coverage map
let edges_observer =
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices();
@ -256,6 +264,7 @@ pub extern "C" fn libafl_main() {
.monitor(monitor)
.run_client(&mut run_client)
.cores(&cores)
.overcommit(opt.overcommit)
.broker_port(broker_port)
.remote_broker_addr(opt.remote_broker_addr)
.stdout_file(Some("/dev/null"))
@ -10,8 +10,9 @@ use clap::Parser;
use libafl::{
corpus::{Corpus, InMemoryOnDiskCorpus, OnDiskCorpus},
events::{
launcher::Launcher, llmp::LlmpShouldSaveState, EventConfig, EventRestarter,
LlmpRestartingEventManager,
launcher::{ClientDescription, Launcher},
llmp::LlmpShouldSaveState,
EventConfig, EventRestarter, LlmpRestartingEventManager,
},
executors::{inprocess::InProcessExecutor, ExitKind},
feedback_or, feedback_or_fast,
@ -162,7 +163,7 @@ pub extern "C" fn libafl_main() {
let mut run_client = |state: Option<_>,
mut restarting_mgr: LlmpRestartingEventManager<_, _, _>,
core_id| {
client_description: ClientDescription| {
// Create an observation channel using the coverage map
let edges_observer =
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices();
@ -259,7 +260,7 @@ pub extern "C" fn libafl_main() {
&mut executor,
&mut restarting_mgr,
&opt.input,
&core_id,
&client_description.core_id(),
&cores,
)
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &opt.input));
@ -10,7 +10,7 @@ use libafl::{
corpus::{Corpus, InMemoryCorpus, OnDiskCorpus},
events::{
centralized::CentralizedEventManager, launcher::CentralizedLauncher,
multi_machine::NodeDescriptor, EventConfig,
multi_machine::NodeDescriptor, ClientDescription, EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind},
feedback_or, feedback_or_fast,
@ -30,7 +30,7 @@ use libafl::{
Error, HasMetadata,
};
use libafl_bolts::{
core_affinity::{CoreId, Cores},
core_affinity::Cores,
rands::StdRand,
shmem::{ShMemProvider, StdShMemProvider},
tuples::{tuple_list, Merge},
@ -155,12 +155,14 @@ pub extern "C" fn libafl_main() {
let monitor = MultiMonitor::new(|s| println!("{s}"));
let mut secondary_run_client = |state: Option<_>,
let mut secondary_run_client =
|state: Option<_>,
mut mgr: CentralizedEventManager<_, _, _, _>,
_core_id: CoreId| {
_client_description: ClientDescription| {
// Create an observation channel using the coverage map
let edges_observer =
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices();
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") })
.track_indices();
// Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time");
@ -262,7 +264,9 @@ pub extern "C" fn libafl_main() {
if state.must_load_initial_inputs() {
state
.load_initial_inputs(&mut fuzzer, &mut executor, &mut mgr, &opt.input)
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &opt.input));
.unwrap_or_else(|_| {
panic!("Failed to load initial corpus at {:?}", &opt.input)
});
println!("We imported {} inputs from disk.", state.corpus().count());
}
if !mgr.is_main() {
@ -274,13 +278,11 @@ pub extern "C" fn libafl_main() {
Ok(())
};
let mut main_run_client = secondary_run_client.clone(); // clone it just for borrow checker
let mut main_run_client = secondary_run_client; // clone it just for borrow checker
let parent_addr: Option<SocketAddr> = if let Some(parent_str) = opt.parent_addr {
Some(SocketAddr::from_str(parent_str.as_str()).expect("Wrong parent address"))
} else {
None
};
let parent_addr: Option<SocketAddr> = opt
.parent_addr
.map(|parent_str| SocketAddr::from_str(parent_str.as_str()).expect("Wrong parent address"));
let mut node_description = NodeDescriptor::builder().parent_addr(parent_addr).build();
@ -134,7 +134,7 @@ pub extern "C" fn libafl_main() {
// to disconnect the event coverter from the broker later
// call detach_from_broker( port)
let mut run_client = |state: Option<_>, mut mgr, _core_id| {
let mut run_client = |state: Option<_>, mut mgr, _client_description| {
let mut bytes = vec![];
// The closure that we want to fuzz
@ -12,66 +12,52 @@
//! On `Unix` systems, the [`Launcher`] will use `fork` if the `fork` feature is used for `LibAFL`.
//! Else, it will start subsequent nodes with the same commandline, and will set special `env` variables accordingly.
use alloc::string::ToString;
#[cfg(feature = "std")]
use core::time::Duration;
use core::{
fmt::{self, Debug, Formatter},
num::NonZeroUsize,
time::Duration,
};
#[cfg(all(unix, feature = "std", feature = "fork"))]
use std::boxed::Box;
#[cfg(feature = "std")]
use std::net::SocketAddr;
#[cfg(all(feature = "std", any(windows, not(feature = "fork"))))]
use std::process::Stdio;
#[cfg(all(unix, feature = "std"))]
use std::{fs::File, os::unix::io::AsRawFd};
use std::{net::SocketAddr, string::String};
#[cfg(all(unix, feature = "std", feature = "fork"))]
use libafl_bolts::llmp::Broker;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use libafl_bolts::llmp::Brokers;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use libafl_bolts::llmp::LlmpBroker;
#[cfg(all(unix, feature = "std"))]
use libafl_bolts::os::dup2;
#[cfg(all(feature = "std", any(windows, not(feature = "fork"))))]
use libafl_bolts::os::startable_self;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use libafl_bolts::{
core_affinity::get_core_ids,
os::{fork, ForkResult},
};
use libafl_bolts::{
core_affinity::{CoreId, Cores},
shmem::ShMemProvider,
tuples::{tuple_list, Handle},
};
#[cfg(feature = "std")]
use serde::{Deserialize, Serialize};
use typed_builder::TypedBuilder;
#[cfg(all(unix, feature = "fork"))]
use {
crate::{
events::{centralized::CentralizedEventManager, CentralizedLlmpHook, StdLlmpEventHook},
inputs::UsesInput,
state::UsesState,
},
alloc::string::ToString,
libafl_bolts::{
core_affinity::get_core_ids,
llmp::{Broker, Brokers, LlmpBroker},
os::{fork, ForkResult},
},
std::boxed::Box,
};
#[cfg(unix)]
use {
libafl_bolts::os::dup2,
std::{fs::File, os::unix::io::AsRawFd},
};
#[cfg(any(windows, not(feature = "fork")))]
use {libafl_bolts::os::startable_self, std::process::Stdio};
use super::EventManagerHooksTuple;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use super::StdLlmpEventHook;
#[cfg(all(unix, feature = "std", feature = "fork", feature = "multi_machine"))]
use crate::events::multi_machine::NodeDescriptor;
#[cfg(all(unix, feature = "std", feature = "fork", feature = "multi_machine"))]
use crate::events::multi_machine::TcpMultiMachineHooks;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use crate::events::{centralized::CentralizedEventManager, CentralizedLlmpHook};
#[cfg(all(unix, feature = "std", feature = "fork"))]
use crate::inputs::UsesInput;
use crate::observers::TimeObserver;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use crate::state::UsesState;
#[cfg(feature = "std")]
#[cfg(all(unix, feature = "fork", feature = "multi_machine"))]
use crate::events::multi_machine::{NodeDescriptor, TcpMultiMachineHooks};
use crate::{
events::{
llmp::{LlmpRestartingEventManager, LlmpShouldSaveState, ManagerKind, RestartingMgr},
EventConfig,
EventConfig, EventManagerHooksTuple,
},
monitors::Monitor,
observers::TimeObserver,
state::{HasExecutions, State},
Error,
};
@ -83,15 +69,67 @@ const _AFL_LAUNCHER_CLIENT: &str = "AFL_LAUNCHER_CLIENT";
#[cfg(all(feature = "fork", unix))]
const LIBAFL_DEBUG_OUTPUT: &str = "LIBAFL_DEBUG_OUTPUT";
/// Information about this client from the launcher
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ClientDescription {
id: usize,
overcommit_id: usize,
core_id: CoreId,
}
impl ClientDescription {
/// Create a [`ClientDescription`]
#[must_use]
pub fn new(id: usize, overcommit_id: usize, core_id: CoreId) -> Self {
Self {
id,
overcommit_id,
core_id,
}
}
/// Id unique to all clients spawned by this launcher
#[must_use]
pub fn id(&self) -> usize {
self.id
}
/// [`CoreId`] this client is bound to
#[must_use]
pub fn core_id(&self) -> CoreId {
self.core_id
}
/// Incremental id unique for all clients on the same core
#[must_use]
pub fn overcommit_id(&self) -> usize {
self.overcommit_id
}
/// Create a string representation safe for environment variables
#[must_use]
pub fn to_safe_string(&self) -> String {
format!("{}_{}_{}", self.id, self.overcommit_id, self.core_id.0)
}
/// Parse the string created by [`Self::to_safe_string`].
#[must_use]
pub fn from_safe_string(input: &str) -> Self {
let mut iter = input.split('_');
let id = iter.next().unwrap().parse().unwrap();
let overcommit_id = iter.next().unwrap().parse().unwrap();
let core_id = iter.next().unwrap().parse::<usize>().unwrap().into();
Self {
id,
overcommit_id,
core_id,
}
}
}
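A short, hedged usage sketch of the ClientDescription type defined just above (the values are made up): the launcher encodes the description into the _AFL_LAUNCHER_CLIENT environment variable with to_safe_string() and a re-spawned child parses it back with from_safe_string(), as the later hunks of this file show.

    use libafl::events::ClientDescription;
    use libafl_bolts::core_affinity::CoreId;

    fn roundtrip_sketch() {
        let desc = ClientDescription::new(3, 1, CoreId(2));
        let encoded = desc.to_safe_string(); // "3_1_2"
        let decoded = ClientDescription::from_safe_string(&encoded);
        assert_eq!(decoded.id(), 3);
        assert_eq!(decoded.overcommit_id(), 1);
        assert_eq!(decoded.core_id().0, 2);
    }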
/// Provides a [`Launcher`], which can be used to launch a fuzzing run on a specified list of cores
///
/// Will hide child output, unless the settings indicate otherwise, or the `LIBAFL_DEBUG_OUTPUT` env variable is set.
#[cfg(feature = "std")]
#[allow(
clippy::type_complexity,
missing_debug_implementations,
clippy::ignored_unit_patterns
)]
#[derive(TypedBuilder)]
pub struct Launcher<'a, CF, MT, SP> {
/// The `ShmemProvider` to use
@ -158,7 +196,7 @@ impl<CF, MT, SP> Debug for Launcher<'_, CF, MT, SP> {
.field("core", &self.cores)
.field("spawn_broker", &self.spawn_broker)
.field("remote_broker_addr", &self.remote_broker_addr);
#[cfg(all(unix, feature = "std"))]
#[cfg(unix)]
{
dbg_struct
.field("stdout_file", &self.stdout_file)
@ -175,21 +213,20 @@ where
SP: ShMemProvider,
{
/// Launch the broker and the clients and fuzz
#[cfg(all(
feature = "std",
any(windows, not(feature = "fork"), all(unix, feature = "fork"))
))]
#[allow(unused_mut, clippy::match_wild_err_arm)]
#[cfg(any(windows, not(feature = "fork"), all(unix, feature = "fork")))]
pub fn launch<S>(&mut self) -> Result<(), Error>
where
S: State + HasExecutions,
CF: FnOnce(Option<S>, LlmpRestartingEventManager<(), S, SP>, CoreId) -> Result<(), Error>,
CF: FnOnce(
Option<S>,
LlmpRestartingEventManager<(), S, SP>,
ClientDescription,
) -> Result<(), Error>,
{
Self::launch_with_hooks(self, tuple_list!())
}
}
#[cfg(feature = "std")]
impl<CF, MT, SP> Launcher<'_, CF, MT, SP>
where
MT: Monitor + Clone,
@ -197,12 +234,15 @@ where
{
/// Launch the broker and the clients and fuzz with a user-supplied hook
#[cfg(all(unix, feature = "fork"))]
#[allow(clippy::similar_names, clippy::too_many_lines)]
pub fn launch_with_hooks<EMH, S>(&mut self, hooks: EMH) -> Result<(), Error>
where
S: State + HasExecutions,
EMH: EventManagerHooksTuple<S> + Clone + Copy,
CF: FnOnce(Option<S>, LlmpRestartingEventManager<EMH, S, SP>, CoreId) -> Result<(), Error>,
CF: FnOnce(
Option<S>,
LlmpRestartingEventManager<EMH, S, SP>,
ClientDescription,
) -> Result<(), Error>,
{
if self.cores.ids.is_empty() {
return Err(Error::illegal_argument(
@ -231,10 +271,10 @@ where
let debug_output = std::env::var(LIBAFL_DEBUG_OUTPUT).is_ok();
// Spawn clients
let mut index = 0_u64;
for (id, bind_to) in core_ids.iter().enumerate() {
if self.cores.ids.iter().any(|&x| x == id.into()) {
for _ in 0..self.overcommit {
let mut index = 0_usize;
for bind_to in core_ids {
if self.cores.ids.iter().any(|&x| x == bind_to) {
for overcommit_id in 0..self.overcommit {
index += 1;
self.shmem_provider.pre_fork()?;
// # Safety
@ -243,7 +283,9 @@ where
ForkResult::Parent(child) => {
self.shmem_provider.post_fork(false)?;
handles.push(child.pid);
log::info!("child spawned and bound to core {id}");
log::info!(
"child spawned with id {index} and bound to core {bind_to:?}"
);
}
ForkResult::Child => {
// # Safety
@ -251,7 +293,9 @@ where
log::info!("{:?} PostFork", unsafe { libc::getpid() });
self.shmem_provider.post_fork(true)?;
std::thread::sleep(Duration::from_millis(index * self.launch_delay));
std::thread::sleep(Duration::from_millis(
index as u64 * self.launch_delay,
));
if !debug_output {
if let Some(file) = &self.opened_stdout_file {
@ -264,12 +308,15 @@ where
}
}
let client_description =
ClientDescription::new(index, overcommit_id, bind_to);
// Fuzzer client. keeps retrying the connection to broker till the broker starts
let builder = RestartingMgr::<EMH, MT, S, SP>::builder()
.shmem_provider(self.shmem_provider.clone())
.broker_port(self.broker_port)
.kind(ManagerKind::Client {
cpu_core: Some(*bind_to),
client_description: client_description.clone(),
})
.configuration(self.configuration)
.serialize_state(self.serialize_state)
@ -277,7 +324,11 @@ where
let builder = builder.time_ref(self.time_ref.clone());
let (state, mgr) = builder.build().launch()?;
return (self.run_client.take().unwrap())(state, mgr, *bind_to);
return (self.run_client.take().unwrap())(
state,
mgr,
client_description,
);
}
};
}
@@ -329,12 +380,16 @@ where

/// Launch the broker and the clients and fuzz
#[cfg(any(windows, not(feature = "fork")))]
#[allow(unused_mut, clippy::match_wild_err_arm, clippy::too_many_lines)]
#[allow(clippy::too_many_lines, clippy::match_wild_err_arm)]
pub fn launch_with_hooks<EMH, S>(&mut self, hooks: EMH) -> Result<(), Error>
where
S: State + HasExecutions,
EMH: EventManagerHooksTuple<S> + Clone + Copy,
CF: FnOnce(Option<S>, LlmpRestartingEventManager<EMH, S, SP>, CoreId) -> Result<(), Error>,
CF: FnOnce(
Option<S>,
LlmpRestartingEventManager<EMH, S, SP>,
ClientDescription,
) -> Result<(), Error>,
{
use libafl_bolts::core_affinity;

@@ -342,14 +397,14 @@ where

let mut handles = match is_client {
Ok(core_conf) => {
let core_id = core_conf.parse()?;
let client_description = ClientDescription::from_safe_string(&core_conf);
// the actual client. do the fuzzing

let builder = RestartingMgr::<EMH, MT, S, SP>::builder()
.shmem_provider(self.shmem_provider.clone())
.broker_port(self.broker_port)
.kind(ManagerKind::Client {
cpu_core: Some(CoreId(core_id)),
client_description: client_description.clone(),
})
.configuration(self.configuration)
.serialize_state(self.serialize_state)
@@ -359,14 +414,13 @@ where

let (state, mgr) = builder.build().launch()?;

return (self.run_client.take().unwrap())(state, mgr, CoreId(core_id));
return (self.run_client.take().unwrap())(state, mgr, client_description);
}
Err(std::env::VarError::NotPresent) => {
// I am a broker
// before going to the broker loop, spawn n clients

let core_ids = core_affinity::get_core_ids().unwrap();
let num_cores = core_ids.len();
let mut handles = vec![];

log::info!("spawning on cores: {:?}", self.cores);
@@ -393,10 +447,13 @@ where
}
}
//spawn clients
for (id, _) in core_ids.iter().enumerate().take(num_cores) {
if self.cores.ids.iter().any(|&x| x == id.into()) {
for _ in 0..self.overcommit {
let mut index = 0;
for core_id in core_ids {
if self.cores.ids.iter().any(|&x| x == core_id) {
for overcommit_i in 0..self.overcommit {
index += 1;
// Forward own stdio to child processes, if requested by user
#[allow(unused_mut)]
let (mut stdout, mut stderr) = (Stdio::null(), Stdio::null());
#[cfg(unix)]
{
@@ -407,10 +464,15 @@ where
}

std::thread::sleep(Duration::from_millis(
id as u64 * self.launch_delay,
core_id.0 as u64 * self.launch_delay,
));

std::env::set_var(_AFL_LAUNCHER_CLIENT, id.to_string());
let client_description =
ClientDescription::new(index, overcommit_i, core_id);
std::env::set_var(
_AFL_LAUNCHER_CLIENT,
client_description.to_safe_string(),
);
let mut child = startable_self()?;
let child = (if debug_output {
&mut child
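On this non-fork path the launcher cannot share the description through memory, so it round-trips it through the _AFL_LAUNCHER_CLIENT environment variable: the parent stores client_description.to_safe_string() before re-spawning itself, and the child rebuilds it with ClientDescription::from_safe_string in the Ok(core_conf) branch shown earlier in this hunk series. The actual encoding used by to_safe_string is not visible in this diff; the sketch below only illustrates the round-trip idea with a made-up "index:overcommit:core" format:

    // Self-contained toy of the env-var handoff; the real ClientDescription encoding is
    // whatever to_safe_string()/from_safe_string() implement, not this ad-hoc format.
    use std::env;

    const AFL_LAUNCHER_CLIENT: &str = "AFL_LAUNCHER_CLIENT"; // stand-in for _AFL_LAUNCHER_CLIENT

    fn main() {
        // Parent side: serialize the description before spawning the child process.
        let (index, overcommit_id, core_id) = (1u64, 0u64, 3u64);
        env::set_var(AFL_LAUNCHER_CLIENT, format!("{index}:{overcommit_id}:{core_id}"));

        // Child side: recover the description from the environment.
        let raw = env::var(AFL_LAUNCHER_CLIENT).expect("not spawned by a launcher");
        let fields: Vec<u64> = raw.split(':').map(|f| f.parse().unwrap()).collect();
        println!("child index={} overcommit_id={} core={}", fields[0], fields[1], fields[2]);
    }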
@@ -476,9 +538,8 @@ where
///
/// Provides a Launcher, which can be used to launch a fuzzing run on a specified list of cores with a single main and multiple secondary nodes
/// This is for centralized, the 4th argument of the closure should mean if this is the main node.
#[cfg(all(unix, feature = "std", feature = "fork"))]
#[cfg(all(unix, feature = "fork"))]
#[derive(TypedBuilder)]
#[allow(clippy::type_complexity, missing_debug_implementations)]
pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
/// The `ShmemProvider` to use
shmem_provider: SP,
@@ -506,6 +567,9 @@ pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
time_obs: Option<Handle<TimeObserver>>,
/// The list of cores to run on
cores: &'a Cores,
/// The number of clients to spawn on each core
#[builder(default = 1)]
overcommit: usize,
/// A file name to write all client output to
#[builder(default = None)]
stdout_file: Option<&'a str>,
@@ -513,7 +577,7 @@ pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
#[builder(default = 10)]
launch_delay: u64,
/// The actual, opened, `stdout_file` - so that we keep it open until the end
#[cfg(all(unix, feature = "std", feature = "fork"))]
#[cfg(all(unix, feature = "fork"))]
#[builder(setter(skip), default = None)]
opened_stdout_file: Option<File>,
/// A file name to write all client stderr output to. If not specified, output is sent to
@@ -521,7 +585,7 @@ pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
#[builder(default = None)]
stderr_file: Option<&'a str>,
/// The actual, opened, `stdout_file` - so that we keep it open until the end
#[cfg(all(unix, feature = "std", feature = "fork"))]
#[cfg(all(unix, feature = "fork"))]
#[builder(setter(skip), default = None)]
opened_stderr_file: Option<File>,
/// The `ip:port` address of another broker to connect our new broker to for multi-machine
@@ -541,13 +605,14 @@ pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
serialize_state: LlmpShouldSaveState,
}

#[cfg(all(unix, feature = "std", feature = "fork"))]
#[cfg(all(unix, feature = "fork"))]
impl<CF, MF, MT, SP> Debug for CentralizedLauncher<'_, CF, MF, MT, SP> {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("Launcher")
.field("configuration", &self.configuration)
.field("broker_port", &self.broker_port)
.field("core", &self.cores)
.field("cores", &self.cores)
.field("overcommit", &self.overcommit)
.field("spawn_broker", &self.spawn_broker)
.field("remote_broker_addr", &self.remote_broker_addr)
.field("stdout_file", &self.stdout_file)
@@ -559,7 +624,7 @@ impl<CF, MF, MT, SP> Debug for CentralizedLauncher<'_, CF, MF, MT, SP> {
/// The standard inner manager of centralized
pub type StdCentralizedInnerMgr<S, SP> = LlmpRestartingEventManager<(), S, SP>;

#[cfg(all(unix, feature = "std", feature = "fork"))]
#[cfg(all(unix, feature = "fork"))]
impl<CF, MF, MT, SP> CentralizedLauncher<'_, CF, MF, MT, SP>
where
MT: Monitor + Clone + 'static,
@@ -573,23 +638,22 @@ where
CF: FnOnce(
Option<S>,
CentralizedEventManager<StdCentralizedInnerMgr<S, SP>, (), S, SP>,
CoreId,
ClientDescription,
) -> Result<(), Error>,
MF: FnOnce(
Option<S>,
CentralizedEventManager<StdCentralizedInnerMgr<S, SP>, (), S, SP>,
CoreId,
ClientDescription,
) -> Result<(), Error>,
{
let restarting_mgr_builder = |centralized_launcher: &Self, core_to_bind: CoreId| {
let restarting_mgr_builder =
|centralized_launcher: &Self, client_description: ClientDescription| {
// Fuzzer client. keeps retrying the connection to broker till the broker starts
let builder = RestartingMgr::<(), MT, S, SP>::builder()
.always_interesting(centralized_launcher.always_interesting)
.shmem_provider(centralized_launcher.shmem_provider.clone())
.broker_port(centralized_launcher.broker_port)
.kind(ManagerKind::Client {
cpu_core: Some(core_to_bind),
})
.kind(ManagerKind::Client { client_description })
.configuration(centralized_launcher.configuration)
.serialize_state(centralized_launcher.serialize_state)
.hooks(tuple_list!());
@@ -603,7 +667,7 @@ where
}
}

#[cfg(all(unix, feature = "std", feature = "fork"))]
#[cfg(all(unix, feature = "fork"))]
impl<CF, MF, MT, SP> CentralizedLauncher<'_, CF, MF, MT, SP>
where
MT: Monitor + Clone + 'static,
@@ -612,7 +676,6 @@ where
/// Launch a Centralized-based fuzzer.
/// - `main_inner_mgr_builder` will be called to build the inner manager of the main node.
/// - `secondary_inner_mgr_builder` will be called to build the inner manager of the secondary nodes.
#[allow(clippy::similar_names, clippy::too_many_lines)]
pub fn launch_generic<EM, EMB, S>(
&mut self,
main_inner_mgr_builder: EMB,
@@ -621,13 +684,17 @@ where
where
S: State,
S::Input: Send + Sync + 'static,
CF: FnOnce(Option<S>, CentralizedEventManager<EM, (), S, SP>, CoreId) -> Result<(), Error>,
CF: FnOnce(
Option<S>,
CentralizedEventManager<EM, (), S, SP>,
ClientDescription,
) -> Result<(), Error>,
EM: UsesState<State = S>,
EMB: FnOnce(&Self, CoreId) -> Result<(Option<S>, EM), Error>,
EMB: FnOnce(&Self, ClientDescription) -> Result<(Option<S>, EM), Error>,
MF: FnOnce(
Option<S>,
CentralizedEventManager<EM, (), S, SP>, // No broker_hooks for centralized EM
CoreId,
ClientDescription,
) -> Result<(), Error>,
<<EM as UsesState>::State as UsesInput>::Input: Send + Sync + 'static,
{
@@ -647,7 +714,6 @@ where
}

let core_ids = get_core_ids().unwrap();
let num_cores = core_ids.len();
let mut handles = vec![];

log::debug!("spawning on cores: {:?}", self.cores);
@@ -662,23 +728,27 @@ where
let debug_output = std::env::var(LIBAFL_DEBUG_OUTPUT).is_ok();

// Spawn clients
let mut index = 0_u64;
for (id, bind_to) in core_ids.iter().enumerate().take(num_cores) {
if self.cores.ids.iter().any(|&x| x == id.into()) {
let mut index = 0_usize;
for bind_to in core_ids {
if self.cores.ids.iter().any(|&x| x == bind_to) {
for overcommit_id in 0..self.overcommit {
index += 1;
self.shmem_provider.pre_fork()?;
match unsafe { fork() }? {
ForkResult::Parent(child) => {
self.shmem_provider.post_fork(false)?;
handles.push(child.pid);
#[cfg(feature = "std")]
log::info!("child spawned and bound to core {id}");
log::info!(
"child with client id {index} spawned and bound to core {bind_to:?}"
);
}
ForkResult::Child => {
log::info!("{:?} PostFork", unsafe { libc::getpid() });
self.shmem_provider.post_fork(true)?;

std::thread::sleep(Duration::from_millis(index * self.launch_delay));
std::thread::sleep(Duration::from_millis(
index as u64 * self.launch_delay,
));

if !debug_output {
if let Some(file) = &self.opened_stdout_file {
@@ -691,11 +761,16 @@ where
}
}

let client_description =
ClientDescription::new(index, overcommit_id, bind_to);

if index == 1 {
// Main client
log::debug!("Running main client on PID {}", std::process::id());
let (state, mgr) =
main_inner_mgr_builder.take().unwrap()(self, *bind_to)?;
let (state, mgr) = main_inner_mgr_builder.take().unwrap()(
self,
client_description.clone(),
)?;

let mut centralized_event_manager_builder =
CentralizedEventManager::builder();
@@ -711,13 +786,22 @@ where
self.time_obs.clone(),
)?;

self.main_run_client.take().unwrap()(state, c_mgr, *bind_to)?;
self.main_run_client.take().unwrap()(
state,
c_mgr,
client_description,
)?;
Err(Error::shutting_down())
} else {
// Secondary clients
log::debug!("Running secondary client on PID {}", std::process::id());
let (state, mgr) =
secondary_inner_mgr_builder.take().unwrap()(self, *bind_to)?;
log::debug!(
"Running secondary client on PID {}",
std::process::id()
);
let (state, mgr) = secondary_inner_mgr_builder.take().unwrap()(
self,
client_description.clone(),
)?;

let centralized_builder = CentralizedEventManager::builder();

@@ -729,13 +813,18 @@ where
self.time_obs.clone(),
)?;

self.secondary_run_client.take().unwrap()(state, c_mgr, *bind_to)?;
self.secondary_run_client.take().unwrap()(
state,
c_mgr,
client_description,
)?;
Err(Error::shutting_down())
}
}?,
};
}
}
}

// Create this after forks, to avoid problems with tokio runtime

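The CentralizedLauncher gets the same migration, plus a new overcommit builder field (default 1): clients are numbered from 1 as they are forked, the first one becomes the main node and every later one a secondary node, and both main_run_client and secondary_run_client, as well as the inner-manager builder closures, now receive the ClientDescription instead of a CoreId. A minimal sketch of that role split (toy code; the numbering simply mirrors the spawn loop in the diff above):

    // Toy sketch: the first spawned client (index == 1) is the main node,
    // every other client is a secondary node.
    fn role_for(index: u64) -> &'static str {
        if index == 1 {
            "main"
        } else {
            "secondary"
        }
    }

    fn main() {
        for index in 1..=4u64 {
            println!("client {index} -> {} node", role_for(index));
        }
    }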
@@ -39,9 +39,9 @@ use crate::events::EVENTMGR_SIGHANDLER_STATE;
use crate::events::{AdaptiveSerializer, CustomBufEventResult, HasCustomBufHandlers};
use crate::{
events::{
Event, EventConfig, EventFirer, EventManager, EventManagerHooksTuple, EventManagerId,
EventProcessor, EventRestarter, HasEventManagerId, LlmpEventManager, LlmpShouldSaveState,
ProgressReporter, StdLlmpEventHook,
launcher::ClientDescription, Event, EventConfig, EventFirer, EventManager,
EventManagerHooksTuple, EventManagerId, EventProcessor, EventRestarter, HasEventManagerId,
LlmpEventManager, LlmpShouldSaveState, ProgressReporter, StdLlmpEventHook,
},
executors::{Executor, HasObservers},
fuzzer::{Evaluator, EvaluatorObservers, ExecutionProcessor},
@@ -322,14 +322,14 @@ where

/// The kind of manager we're creating right now
#[cfg(feature = "std")]
#[derive(Debug, Clone, Copy)]
#[derive(Debug, Clone)]
pub enum ManagerKind {
/// Any kind will do
Any,
/// A client, getting messages from a local broker.
Client {
/// The CPU core ID of this client
cpu_core: Option<CoreId>,
/// The client description
client_description: ClientDescription,
},
/// An [`LlmpBroker`], forwarding the packets of local clients.
Broker,
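The ManagerKind::Client variant now carries the whole ClientDescription instead of an Option<CoreId>. Presumably because ClientDescription is not Copy, the enum drops its Copy derive, and, as the next hunks show, the restarting manager matches on &self.kind and derives the core to bind to via client_description.core_id(). A self-contained toy model of that pattern (the field names inside ClientDescription here are invented; only the core_id() accessor is taken from the diff):

    // Toy model (not the LibAFL types): the Client variant stores a full description,
    // the enum is Clone but no longer Copy, and the core id is derived on demand.
    #[derive(Debug, Clone, Copy, PartialEq, Eq)]
    struct CoreId(usize);

    #[derive(Debug, Clone)]
    #[allow(dead_code)] // toy fields; only core_id is used below
    struct ClientDescription {
        id: usize,
        overcommit: usize,
        core_id: CoreId,
    }

    impl ClientDescription {
        fn core_id(&self) -> CoreId {
            self.core_id
        }
    }

    #[derive(Debug, Clone)]
    enum ManagerKind {
        Any,
        Client { client_description: ClientDescription },
        Broker,
    }

    fn core_of(kind: &ManagerKind) -> Option<CoreId> {
        // Matching on a reference, as the restarting manager now does with `match &self.kind`.
        match kind {
            ManagerKind::Client { client_description } => Some(client_description.core_id()),
            _ => None,
        }
    }

    fn main() {
        let kind = ManagerKind::Client {
            client_description: ClientDescription { id: 1, overcommit: 0, core_id: CoreId(2) },
        };
        println!("{:?}", core_of(&kind));
    }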
@@ -481,7 +481,7 @@ where
Err(Error::shutting_down())
};
// We get here if we are on Unix, or we are a broker on Windows (or without forks).
let (mgr, core_id) = match self.kind {
let (mgr, core_id) = match &self.kind {
ManagerKind::Any => {
let connection =
LlmpConnection::on_port(self.shmem_provider.clone(), self.broker_port)?;
@@ -528,7 +528,7 @@ where
broker_things(broker, self.remote_broker_addr)?;
unreachable!("The broker may never return normally, only on errors or when shutting down.");
}
ManagerKind::Client { cpu_core } => {
ManagerKind::Client { client_description } => {
// We are a client
let mgr = LlmpEventManager::builder()
.always_interesting(self.always_interesting)
@@ -540,7 +540,7 @@ where
self.time_ref.clone(),
)?;

(mgr, cpu_core)
(mgr, Some(client_description.core_id()))
}
};

@@ -40,7 +40,6 @@ use libafl_bolts::os::CTRL_C_EXIT;
use libafl_bolts::{
current_time,
tuples::{Handle, MatchNameRef},
ClientId,
};
use serde::{Deserialize, Serialize};
#[cfg(feature = "std")]
@@ -288,7 +287,7 @@ where
/// The time of generation of the event
time: Duration,
/// The original sender if, if forwarded
forward_id: Option<ClientId>,
forward_id: Option<libafl_bolts::ClientId>,
/// The (multi-machine) node from which the tc is from, if any
#[cfg(all(unix, feature = "std", feature = "multi_machine"))]
node_id: Option<NodeId>,