Make Launcher use ClientDescription instead of CoreId (#2676)

* launcher now uses client_id instead of core_id
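
A rough sketch of the new per-client entry point (manager and generic types elided; only `ClientDescription`, `core_id()` and `id()` are taken from the diff below):

```rust
use libafl::{events::ClientDescription, Error};

// Rough sketch: the per-client closure's third argument is now a ClientDescription.
// The CoreId is still reachable through it, and the extra client id is handy for
// per-client directories (e.g. when overcommit > 1 puts several clients on one core).
fn run_one_client(client_description: &ClientDescription) -> Result<(), Error> {
    let _core_id = client_description.core_id(); // what the closure used to receive directly
    let _client_id = client_description.id(); // unique per spawned client
    // ... set up state, executor and fuzzer for this client ...
    Ok(())
}
```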

* adding overcommit to an example fuzzer
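
For context, the new knob as wired into the example's `Launcher` setup (abridged; the surrounding variables come from the example, and `2` is just an illustrative value):

```rust
// Abridged Launcher setup: overcommit(n) spawns n clients per selected core,
// which helps when a single client cannot keep a core busy (e.g. because it
// sleeps while waiting on the target).
Launcher::builder()
    .shmem_provider(shmem_provider)
    .configuration(EventConfig::from_name("default"))
    .monitor(monitor)
    .run_client(&mut run_client)
    .cores(&cores)
    .overcommit(2) // illustrative value; the example exposes it as a CLI flag
    .broker_port(broker_port)
    .build()
    .launch()?;
```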

* Replace addr_of with &raw across the codebase (#2669)

* Replace addr_of with &raw across the codebase
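
The change in a nutshell (plain-Rust illustration, not a specific LibAFL call site):

```rust
#[allow(dead_code)]
#[repr(packed)]
struct Counters {
    flags: u8,
    hits: u32,
}

// Taking the address of a potentially unaligned field must not create an
// intermediate reference. Previously this used core::ptr::addr_of!(c.hits);
// with Rust 1.82+ the built-in `&raw` operator does the same thing directly.
fn hits_ptr(c: &Counters) -> *const u32 {
    &raw const c.hits
}
```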

* fix fixes

* more fix

* undo clang fmt?

* oops

* fix?

* allocator fix

* more fix

* more more

* more docs

* more fix

* more more more

* hm

* more

* fix Frida

* needed

* more error

* qemu

* Introduce workspace (again) (#2673)

* Trying to redo workspace deps again after #2672

* unused

* clippy

* fixing formatting issues

* cloning values to make borrow checker happy

* simplifying cfg constraints, removing excessive clippy allows

* printing the clang version that is used, to find inconsistencies between CI and local formatting

* some fixes according to the CI

* Specifying types

* improved logging for formatter

* more attempts at logging for the CI formatting

* fixing setting LLVM version in formatting in CI

* fixing clippy allows

* renaming launcher's ClientID to ClientDescription

* Lower capped RAND generators (#2671)

* Lower capped rand generators

* Updated all references to RAND generators

* Formatting updates

* New RAND bytes generator constructor

* Revert "Updated all references to RAND generators"

This reverts commit 9daad894b25ec3867daf93c4fe67c03abec1d8c6.

* Revert "Formatting updates"

This reverts commit ff2a61a366c48b3f313878f62409e51b1e1ed663.

* cargo nightly format

* Added must_use to with_min_size

* fix error '#' is not followed by a macro parameter (#2678)

* Use version.workspace (#2682)

* LibAFL_QEMU: Don't return a generic Address from Register reads (#2681)

* LibAFL_QEMU: Make ReadReg always return GuestReg type

* Don't return a generic address
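
Sketch of the resulting call shape, assuming an x86_64 target and a `qemu` handle (error handling elided; treat the exact signature as an assumption):

```rust
// Register reads now always come back as the guest architecture's GuestReg
// type instead of a caller-chosen, address-like generic.
let rax: GuestReg = qemu.read_reg(Regs::Rax).unwrap();
```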

* fix fuzzers

* fix mips

* Add DrCovReader to read DrCov files and DrCov dumper and merge utils (#2680)

* Add DrCov Reader

* Removed libafl_jumper deps

* Fix DrCovWriter, add dump_drcov_addrs

* Taplo

* Move frida from usize to u64

* DrCov usize=>u64

* Better error print

* More u64

* ?

* debug

* clippy

* clippy

* Add Merge option to DrCovReader

* Add drcov_merge tool

* Move folder around

* DrCov

* More assert

* fmt

* Move around

* Fix print

* Add option to read multiple files/full folders

* Fix build_all_fuzzers.sh for local runs (#2686)

* Add Intel PT tracing support (#2471)

* WIP: IntelPT qemu systemmode

* use perf-event-open-sys instead of bindgen

* intelPT Add enable and disable tracing, add test

* Use static_assertions crate

* Fix volatiles, finish test

* Add Intel PT availability check

* Use LibAFL errors in Result

* Improve filtering

* Add KVM pt_mode check

* move static_assertions use

* Check for perf_event_open support

* Add (empty) IntelPT module

* Add IntelPTModule POC

* partial ideas to implement intel pt

* forgot smth

* trace decoding draft

* add libipt decoder

* use cpuid instead of reading /proc/cpuinfo
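
The kind of check this enables (illustrative, x86_64 only; the bit position is from the Intel SDM, not from this patch):

```rust
#[cfg(target_arch = "x86_64")]
fn cpu_has_intel_pt() -> bool {
    // CPUID.(EAX=07H, ECX=0):EBX[bit 25] is the Intel PT capability bit,
    // which replaces string-matching on /proc/cpuinfo.
    let leaf7 = unsafe { core::arch::x86_64::__cpuid_count(0x7, 0) };
    leaf7.ebx & (1 << 25) != 0
}
```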

* investigating nondeterministic behaviour

* intel_pt module add thread creation hook

* Fully identify deps versions

Cargo docs: Although it looks like a specific version of the crate, it actually specifies a range of versions and allows SemVer compatible updates

* Move mem image to module, output to file for debug

* fixup! Use static_assertions crate

* Exclude host kernel from traces

* Bump libipt-rs

* Callback to get memory as an alternative to image

* WIP Add bootloader fuzzer example

* Split availability check: add availability_with_qemu

* Move IntelPT to observer

* Improve test docs

* Clippy happy now

* Taplo happy now

* Add IntelPTObserver boilerplate

* Hook instead of Observer

* Clippy & Taplo

* Add psb_freq setting

* Extremely bad and dirty babyfuzzer stealing

* Use thread local cell instead of mutex

* Try a trace diff based naive feedback

* fix perf aux buffer wrap handling

* Use f64 for feedback score

* Fix clippy for cargo test

* Add config format tests

* WIP intelpt babyfuzzer with fork

* Fix not wrapped tail offset in split buffer

* Baby PT with raw traces diff working

* Cache nr_filters

* Use Lazy_lock for perf_type

* Add baby_fuzzer_intel_pt

* restore baby fuzzer

* baby_fuzzer with block decoder

* instruction decoder instead of block

* Fix after upstream merge

* OwnedRefMut instead of Cow

* Read mem directly instead of going through files

* Fix cache lifetime and tail update

* clippy

* Taplo

* Compile caps only on linux

* clippy

* Fail compilation on unsupported OSes

* Add baby_fuzzer_intel_pt to CI

* Cleanup

* Move intel pt + linux check

* fix baby pt

* rollback forkexecutor

* Remove unused dep

* Cleanup

* Lints

* Compute an edge id instead of using only block ip

* Binary only intelPT POC

* put linux specific code behind target_os=linux

* Clippy & Taplo

* fix CI

* Disable relocation

* No unwrap in decode

* No expect in decode

* Better logging, smaller aux buffer

* add IntelPTBuilder

* some lints

* Add exclude_hv config

* Per CPU tracing and inheritance

* Parametrize buffer size

* Try not to break commandExecutor API pt.1

* Try not to break commandExecutor API pt.2

* Try not to break commandExecutor API pt.3

* fix baby PT

* Support on_crash & on_timeout callbacks for libafl_qemu modules (#2620)

* support (unsafe) on_crash / on_timeout callbacks for modules

* use libc types in bindgen

* Move common code to bolts

* Cleanup

* Revert changes to backtrace_baby_fuzzers/command_executor

* Move intel_pt in one file

* Use workspace deps

* add nr_addr_filter fallback

* Cleaning

* Improve decode

* Clippy

* Improve errors and docs

* Impl from<PtError> for libafl::Error

* Merge hooks

* Docs

* Clean command executor

* fix baby PT

* fix baby PT warnings

* decoder fills the map with no vec alloc

* WIP command executor intel PT

* filter_map() instead of filter().map()

* fix docs

* fix windows?

* Baby lints

* Small cleanings

* Use personality to disable ASLR at runtime
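
The idea behind this commit, as an illustrative sketch rather than its exact code:

```rust
// Disable ASLR for the current process at runtime via personality(2), so that
// decoded Intel PT addresses line up with the loaded image across runs.
fn disable_aslr() -> std::io::Result<()> {
    // Passing 0xffffffff queries the current persona without changing it.
    let old = unsafe { libc::personality(0xffff_ffff) };
    if old == -1 {
        return Err(std::io::Error::last_os_error());
    }
    let new = old as libc::c_ulong | libc::ADDR_NO_RANDOMIZE as libc::c_ulong;
    if unsafe { libc::personality(new) } == -1 {
        return Err(std::io::Error::last_os_error());
    }
    Ok(()) // the persona is inherited by children spawned afterwards
}
```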

* Fix nix dep

* Use proc-maps in babyfuzzer

* working ET_DYN elf

* Cleanup Cargo.toml

* Clean command executor

* introduce PtraceCommandConfigurator

* Fix clippy & taplo

* input via stdin

* libipt as workspace dep

* Check kernel version

* support Arg input location

* Reorder stuff

* File input

* timeout support for PtraceExec

* Lints

* Move out method not needing self from IntelPT

* unimplemented

* Lints

* Move intel_pt_baby_fuzzer

* Move intel_pt_command_executor

* Document the need for smp_rmb
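
Roughly why the barrier is needed (illustrative; the pointer and the surrounding perf mmap handling are placeholders):

```rust
use std::sync::atomic::{fence, Ordering};

// The head index of the perf aux buffer must be observed *before* the bytes it
// guards, so the load of the head is followed by an acquire fence, the
// userspace analogue of the kernel's smp_rmb() mentioned in this commit.
unsafe fn read_aux_head(aux_head_ptr: *const u64) -> u64 {
    let head = unsafe { core::ptr::read_volatile(aux_head_ptr) };
    fence(Ordering::Acquire); // pairs with the kernel's write barrier on head
    head
}
```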

* Better comment

* Readme and Makefile.toml instead of build.rs

* Move out from libafl_bolts to libafl_intelpt

* Fix hooks

* (Almost) fix intel_pt command exec

* fix intel_pt command exec debug

* Fix baby_fuzzer

* &raw over addr_of!

* cfg(target_os = "linux")

* bolts Cargo.toml leftover

* minimum wage README.md

* extract join_split_trace from decode

* extract decode_block from decode

* add 1 to `previous_block_ip` so that recursive basic blocks don't all map to index 0
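
Hypothetical illustration of the scheme (names and the hash itself are made up; only the `+ 1` rationale matches the commit):

```rust
fn edge_index(previous_block_ip: u64, current_block_ip: u64, map_size: usize) -> usize {
    // +1 so that a block transferring control to itself (prev == curr) does not
    // XOR to 0 and pile every such edge onto map index 0.
    let edge = previous_block_ip.wrapping_add(1) ^ current_block_ip;
    (edge as usize) % map_size
}
```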

* More generic hook

* fix windows

* Update CI, fmt

* No bitbybit

* Fix docker?

* Fix Apple silicon?

* Use old libipt from crates.io

---------

Co-authored-by: Romain Malmain <romain.malmain@pm.me>
Co-authored-by: Dominik Maier <domenukk@gmail.com>

* libafl-fuzz: introduce nyx_mode (#2503)

* add nyx_mode
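
Shape of a Nyx-backed client, lifted from the Nyx example fuzzer touched in this diff (the share-dir path and the surrounding setup are specific to that example):

```rust
// Each client is pinned to its core's CPU; secondaries reference the parent CPU.
let settings = NyxSettings::builder()
    .cpu_id(client_description.core_id().0)
    .parent_cpu_id(Some(parent_cpu_id.0))
    .build();
let helper = NyxHelper::new("/tmp/nyx_libxml2/", settings).unwrap();
```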

* fix frida ci?

* damn clippy

* clippy

* LibAFL: Remove `tui_monitor` from default features (#2685)

* No Usermode default

* no tui

* gg

* try fix CI

* fmt

---------

Co-authored-by: Dominik Maier <dmnk@google.com>

* Actually make ConstMapObserver work, introduce `nonnull_raw_mut` macro (#2687)

* Actually make ConstMapObserver work

* fixes

* does that work?

* more

* Feature: libafl-fuzzfuzzbench (#2689)

* fuzzbench

* clippy

* fmt

* fix unicorn CI?

* Move bitfields to bitbybit (#2688)

* move to bitbybit

* Restore bitbybit dependent code

* Clippy

* Fix NautilusContext::from_file for python files (#2690)

* Bump to 0.14.0 (#2692)

* Fix versions in libafl and libafl_intelpt for crates.io (#2693)

* Fix versions in libafl and libafl_intelpt for crates

* Add libafl_intelpt to publish

* StdMOptMutator::new: remove unused type parameter (#2695)

`I` is unused in `::new` and thus requires callers to explicitly specify
any type as it can't be determined by type inference.

Clippy's `extra_unused_type_parameters` should pick this up, but it is
tuned a bit too conservatively in order to avoid false positives, AFAICT.
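
A simplified illustration of the problem (not the actual LibAFL types):

```rust
struct MOptLike;

impl MOptLike {
    // Before: `I` never influences the result, yet callers had to spell out
    // *some* type, e.g. MOptLike::with_unused::<AnyTypeAtAll>().
    fn with_unused<I>() -> Self {
        MOptLike
    }

    // After: the parameter is dropped and plain MOptLike::new() works.
    fn new() -> Self {
        MOptLike
    }
}
```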

* Move test_harness from source directory to OUT_DIR (#2694)

* remove test_harness from source directory

* fmt

* Add package.metadata.docs.rs for libafl_intelpt (#2696)

* libafl-fuzz: fix cmplog running on inputs more than once (#2697)

* libafl-fuzz: fix cmplog running on inputs more than once

* fmt

* fix afl++ cmplog header

* update to latest afl stable commit

* Libafl workspace internal deps in workspace Cargo.toml (#2691)

* Add internal deps to workspace

* libafl: use workspace internal deps

* libafl_bolts: use workspace internal deps

* 0.14.0

* use workspace internal deps

* Fix tui monitor for example fuzzers (#2699)

* Fix tui monitor for example fuzzers

* New clippy lint

* fix

* Update pyo3-build-config requirement from 0.22.3 to 0.23.1 (#2701)

Updates the requirements on [pyo3-build-config](https://github.com/pyo3/pyo3) to permit the latest version.
- [Release notes](https://github.com/pyo3/pyo3/releases)
- [Changelog](https://github.com/PyO3/pyo3/blob/main/CHANGELOG.md)
- [Commits](https://github.com/pyo3/pyo3/compare/v0.22.3...v0.23.1)

---
updated-dependencies:
- dependency-name: pyo3-build-config
  dependency-type: direct:production
...

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>

* bolts: fix build for tier 3 platforms. (#2700)

cater to platforms known to support this feature instead.

* Pre init module hooks (#2704)

* differentiate pre-qemu-init and post-qemu-init hooks

* api breakage: Emulator::new_with_qemu is not public anymore.

* Fix edge module generators (#2702)

* fix generators

* fix metadata removal for ExecutionCountRestartHelper (#2705)

* Ignore pyo3 update (#2709)

* libafl-fuzz: feature-flag nyx mode (#2712)

* Bump ctor dependency to make nightly compile again (#2713)

* Batched timeout doc (#2716)
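
The API being documented, as used by the in-process examples in this diff (Linux only; elsewhere the examples fall back to `with_timeout`). Roughly, the timeout timer is re-armed once per batch of executions rather than once per run:

```rust
#[cfg(target_os = "linux")]
let mut executor = InProcessExecutor::batched_timeout(
    &mut harness,
    tuple_list!(edges_observer, time_observer),
    &mut fuzzer,
    &mut state,
    &mut mgr,
    opt.timeout,
)?;
```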

* timeout doc

* clp

* FMT

* More batched timeout doc (#2717)

* timeout doc

* clp

* FMT

* more

* fixing an overexcited cast

* renaming variables

* removing unnecessary brackets

* fixing imports

* fixing imports

* renaming more variables

* even more variable renaming

* removing duplicate clap short options

* reverting mistaken variable renaming

* comparing the actual cores instead of an enumeration index

---------

Signed-off-by: dependabot[bot] <support@github.com>
Co-authored-by: Dominik Maier <domenukk@gmail.com>
Co-authored-by: Subhojeet Mukherjee, PhD <57270300+CowBoy4mH3LL@users.noreply.github.com>
Co-authored-by: jejuisland87654 <jejuisland87654@gmail.com>
Co-authored-by: Marco C. <46560192+Marcondiro@users.noreply.github.com>
Co-authored-by: Dongjia "toka" Zhang <tokazerkje@outlook.com>
Co-authored-by: Romain Malmain <romain.malmain@pm.me>
Co-authored-by: Aarnav <aarnav@srlabs.de>
Co-authored-by: Dominik Maier <dmnk@google.com>
Co-authored-by: Andrea Fioraldi <andreafioraldi@gmail.com>
Co-authored-by: Mrmaxmeier <3913977+Mrmaxmeier@users.noreply.github.com>
Co-authored-by: Sharad Khanna <sharad@mineo333.dev>
Co-authored-by: dependabot[bot] <49699333+dependabot[bot]@users.noreply.github.com>
Co-authored-by: David CARLIER <devnexen@gmail.com>
Co-authored-by: Henry Chu <henrytech@outlook.com>
Valentin Huber 2024-11-29 19:36:13 +01:00 committed by GitHub
parent 0d0bbf0c5d
commit bdde109867
No known key found for this signature in database
GPG Key ID: B5690EEEBB952194
27 changed files with 764 additions and 586 deletions

View File

@ -194,7 +194,7 @@ jobs:
cargo-fmt: cargo-fmt:
runs-on: ubuntu-24.04 runs-on: ubuntu-24.04
env: env:
MAIN_LLVM_VERSION: 19 MAIN_LLVM_VERSION: 19
steps: steps:
- uses: actions/checkout@v4 - uses: actions/checkout@v4
- uses: ./.github/workflows/ubuntu-prepare - uses: ./.github/workflows/ubuntu-prepare

View File

@ -5,7 +5,9 @@ use std::{path::PathBuf, ptr::null};
use frida_gum::Gum; use frida_gum::Gum;
use libafl::{ use libafl::{
corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus}, corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus},
events::{launcher::Launcher, llmp::LlmpRestartingEventManager, EventConfig}, events::{
launcher::Launcher, llmp::LlmpRestartingEventManager, ClientDescription, EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor}, executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor},
feedback_or, feedback_or_fast, feedback_or, feedback_or_fast,
feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback}, feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -93,13 +95,17 @@ unsafe fn fuzz(
let shmem_provider = StdShMemProvider::new()?; let shmem_provider = StdShMemProvider::new()?;
let mut run_client = |state: Option<_>, mgr: LlmpRestartingEventManager<_, _, _>, core_id| { let mut run_client = |state: Option<_>,
mgr: LlmpRestartingEventManager<_, _, _>,
client_description: ClientDescription| {
// The restarting state will spawn the same process again as child, then restarted it each time it crashes. // The restarting state will spawn the same process again as child, then restarted it each time it crashes.
// println!("{:?}", mgr.mgr_id()); // println!("{:?}", mgr.mgr_id());
if options.asan && options.asan_cores.contains(core_id) { if options.asan && options.asan_cores.contains(client_description.core_id()) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -222,9 +228,11 @@ unsafe fn fuzz(
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?; fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} else if options.cmplog && options.cmplog_cores.contains(core_id) { } else if options.cmplog && options.cmplog_cores.contains(client_description.core_id()) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -356,9 +364,11 @@ unsafe fn fuzz(
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?; fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} else { } else {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -473,7 +483,7 @@ unsafe fn fuzz(
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?; fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} }
}; };

View File

@ -5,7 +5,9 @@ use std::path::PathBuf;
use frida_gum::Gum; use frida_gum::Gum;
use libafl::{ use libafl::{
corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus}, corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus},
events::{launcher::Launcher, llmp::LlmpRestartingEventManager, EventConfig}, events::{
launcher::Launcher, llmp::LlmpRestartingEventManager, ClientDescription, EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor}, executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor},
feedback_or, feedback_or_fast, feedback_or, feedback_or_fast,
feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback}, feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -73,7 +75,9 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
let shmem_provider = StdShMemProvider::new()?; let shmem_provider = StdShMemProvider::new()?;
let mut run_client = |state: Option<_>, mgr: LlmpRestartingEventManager<_, _, _>, core_id| { let mut run_client = |state: Option<_>,
mgr: LlmpRestartingEventManager<_, _, _>,
client_description: ClientDescription| {
// The restarting state will spawn the same process again as child, then restarted it each time it crashes. // The restarting state will spawn the same process again as child, then restarted it each time it crashes.
// println!("{:?}", mgr.mgr_id()); // println!("{:?}", mgr.mgr_id());
@ -90,8 +94,10 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
ExitKind::Ok ExitKind::Ok
}; };
if options.asan && options.asan_cores.contains(core_id) { if options.asan && options.asan_cores.contains(client_description.core_id()) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -214,9 +220,11 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?; fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} else if options.cmplog && options.cmplog_cores.contains(core_id) { } else if options.cmplog && options.cmplog_cores.contains(client_description.core_id()) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -349,9 +357,11 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?; fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} else { } else {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -466,7 +476,7 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?; fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} }
}; };

View File

@ -22,7 +22,9 @@ use std::path::PathBuf;
use frida_gum::Gum; use frida_gum::Gum;
use libafl::{ use libafl::{
corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus}, corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus},
events::{launcher::Launcher, llmp::LlmpRestartingEventManager, EventConfig}, events::{
launcher::Launcher, llmp::LlmpRestartingEventManager, ClientDescription, EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor}, executors::{inprocess::InProcessExecutor, ExitKind, ShadowExecutor},
feedback_and_fast, feedback_or, feedback_or_fast, feedback_and_fast, feedback_or, feedback_or_fast,
feedbacks::{ConstFeedback, CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback}, feedbacks::{ConstFeedback, CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -82,7 +84,9 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
let shmem_provider = StdShMemProvider::new()?; let shmem_provider = StdShMemProvider::new()?;
let mut run_client = |state: Option<_>, mgr: LlmpRestartingEventManager<_, _, _>, core_id| { let mut run_client = |state: Option<_>,
mgr: LlmpRestartingEventManager<_, _, _>,
client_description: ClientDescription| {
// The restarting state will spawn the same process again as child, then restarted it each time it crashes. // The restarting state will spawn the same process again as child, then restarted it each time it crashes.
// println!("{:?}", mgr.mgr_id()); // println!("{:?}", mgr.mgr_id());
@ -99,8 +103,10 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
ExitKind::Ok ExitKind::Ok
}; };
if options.asan && options.asan_cores.contains(core_id) { if options.asan && options.asan_cores.contains(client_description.core_id()) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -212,9 +218,11 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?; fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} else if options.cmplog && options.cmplog_cores.contains(core_id) { } else if options.cmplog && options.cmplog_cores.contains(client_description.core_id()) {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -340,9 +348,11 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?; fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} else { } else {
(|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, _core_id| { (|state: Option<_>,
mut mgr: LlmpRestartingEventManager<_, _, _>,
_client_description| {
let gum = Gum::obtain(); let gum = Gum::obtain();
let coverage = CoverageRuntime::new(); let coverage = CoverageRuntime::new();
@ -454,7 +464,7 @@ unsafe fn fuzz(options: &FuzzerOptions) -> Result<(), Error> {
.unwrap(); .unwrap();
Ok(()) Ok(())
})(state, mgr, core_id) })(state, mgr, client_description)
} }
}; };

View File

@ -8,7 +8,10 @@ use std::{env, fmt::Write, fs::DirEntry, io, path::PathBuf, process};
use clap::{builder::Str, Parser}; use clap::{builder::Str, Parser};
use libafl::{ use libafl::{
corpus::{Corpus, NopCorpus}, corpus::{Corpus, NopCorpus},
events::{launcher::Launcher, EventConfig, EventRestarter, LlmpRestartingEventManager}, events::{
launcher::Launcher, ClientDescription, EventConfig, EventRestarter,
LlmpRestartingEventManager,
},
executors::ExitKind, executors::ExitKind,
fuzzer::StdFuzzer, fuzzer::StdFuzzer,
inputs::{BytesInput, HasTargetBytes}, inputs::{BytesInput, HasTargetBytes},
@ -191,87 +194,89 @@ pub fn fuzz() {
ExitKind::Ok ExitKind::Ok
}; };
let mut run_client = let mut run_client = |state: Option<_>,
|state: Option<_>, mut mgr: LlmpRestartingEventManager<_, _, _>, core_id| { mut mgr: LlmpRestartingEventManager<_, _, _>,
let core_idx = options client_description: ClientDescription| {
.cores let core_id = client_description.core_id();
.position(core_id) let core_idx = options
.expect("Failed to get core index"); .cores
let files = corpus_files .position(core_id)
.iter() .expect("Failed to get core index");
.skip(files_per_core * core_idx) let files = corpus_files
.take(files_per_core) .iter()
.map(|x| x.path()) .skip(files_per_core * core_idx)
.collect::<Vec<PathBuf>>(); .take(files_per_core)
.map(|x| x.path())
if files.is_empty() { .collect::<Vec<PathBuf>>();
mgr.send_exiting()?;
Err(Error::ShuttingDown)?
}
#[allow(clippy::let_unit_value)]
let mut feedback = ();
#[allow(clippy::let_unit_value)]
let mut objective = ();
let mut state = state.unwrap_or_else(|| {
StdState::new(
StdRand::new(),
NopCorpus::new(),
NopCorpus::new(),
&mut feedback,
&mut objective,
)
.unwrap()
});
let scheduler = QueueScheduler::new();
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
let mut cov_path = options.coverage_path.clone();
let coverage_name = cov_path.file_stem().unwrap().to_str().unwrap();
let coverage_extension = cov_path.extension().unwrap_or_default().to_str().unwrap();
let core = core_id.0;
cov_path.set_file_name(format!("{coverage_name}-{core:03}.{coverage_extension}"));
let emulator_modules = tuple_list!(DrCovModule::builder()
.filter(StdAddressFilter::default())
.filename(cov_path)
.full_trace(false)
.build());
let emulator = Emulator::empty()
.qemu(qemu)
.modules(emulator_modules)
.build()?;
let mut executor = QemuExecutor::new(
emulator,
&mut harness,
(),
&mut fuzzer,
&mut state,
&mut mgr,
options.timeout,
)
.expect("Failed to create QemuExecutor");
if state.must_load_initial_inputs() {
state
.load_initial_inputs_by_filenames(&mut fuzzer, &mut executor, &mut mgr, &files)
.unwrap_or_else(|_| {
println!("Failed to load initial corpus at {:?}", &options.input_dir);
process::exit(0);
});
log::debug!("We imported {} inputs from disk.", state.corpus().count());
}
log::debug!("Processed {} inputs from disk.", files.len());
if files.is_empty() {
mgr.send_exiting()?; mgr.send_exiting()?;
Err(Error::ShuttingDown)? Err(Error::ShuttingDown)?
}; }
#[allow(clippy::let_unit_value)]
let mut feedback = ();
#[allow(clippy::let_unit_value)]
let mut objective = ();
let mut state = state.unwrap_or_else(|| {
StdState::new(
StdRand::new(),
NopCorpus::new(),
NopCorpus::new(),
&mut feedback,
&mut objective,
)
.unwrap()
});
let scheduler = QueueScheduler::new();
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
let mut cov_path = options.coverage_path.clone();
let coverage_name = cov_path.file_stem().unwrap().to_str().unwrap();
let coverage_extension = cov_path.extension().unwrap_or_default().to_str().unwrap();
let core = core_id.0;
cov_path.set_file_name(format!("{coverage_name}-{core:03}.{coverage_extension}"));
let emulator_modules = tuple_list!(DrCovModule::builder()
.filter(StdAddressFilter::default())
.filename(cov_path)
.full_trace(false)
.build());
let emulator = Emulator::empty()
.qemu(qemu)
.modules(emulator_modules)
.build()?;
let mut executor = QemuExecutor::new(
emulator,
&mut harness,
(),
&mut fuzzer,
&mut state,
&mut mgr,
options.timeout,
)
.expect("Failed to create QemuExecutor");
if state.must_load_initial_inputs() {
state
.load_initial_inputs_by_filenames(&mut fuzzer, &mut executor, &mut mgr, &files)
.unwrap_or_else(|_| {
println!("Failed to load initial corpus at {:?}", &options.input_dir);
process::exit(0);
});
log::debug!("We imported {} inputs from disk.", state.corpus().count());
}
log::debug!("Processed {} inputs from disk.", files.len());
mgr.send_exiting()?;
Err(Error::ShuttingDown)?
};
match Launcher::builder() match Launcher::builder()
.shmem_provider(StdShMemProvider::new().expect("Failed to init shared memory")) .shmem_provider(StdShMemProvider::new().expect("Failed to init shared memory"))

View File

@ -2,12 +2,13 @@ use std::env;
use libafl::{ use libafl::{
corpus::{InMemoryOnDiskCorpus, OnDiskCorpus}, corpus::{InMemoryOnDiskCorpus, OnDiskCorpus},
events::ClientDescription,
inputs::BytesInput, inputs::BytesInput,
monitors::Monitor, monitors::Monitor,
state::StdState, state::StdState,
Error, Error,
}; };
use libafl_bolts::{core_affinity::CoreId, rands::StdRand, tuples::tuple_list}; use libafl_bolts::{rands::StdRand, tuples::tuple_list};
#[cfg(feature = "injections")] #[cfg(feature = "injections")]
use libafl_qemu::modules::injections::InjectionModule; use libafl_qemu::modules::injections::InjectionModule;
use libafl_qemu::{ use libafl_qemu::{
@ -61,8 +62,9 @@ impl Client<'_> {
&self, &self,
state: Option<ClientState>, state: Option<ClientState>,
mgr: ClientMgr<M>, mgr: ClientMgr<M>,
core_id: CoreId, client_description: ClientDescription,
) -> Result<(), Error> { ) -> Result<(), Error> {
let core_id = client_description.core_id();
let mut args = self.args()?; let mut args = self.args()?;
Harness::edit_args(&mut args); Harness::edit_args(&mut args);
log::debug!("ARGS: {:#?}", args); log::debug!("ARGS: {:#?}", args);
@ -123,7 +125,7 @@ impl Client<'_> {
.qemu(qemu) .qemu(qemu)
.harness(harness) .harness(harness)
.mgr(mgr) .mgr(mgr)
.core_id(core_id) .client_description(client_description)
.extra_tokens(extra_tokens); .extra_tokens(extra_tokens);
if self.options.rerun_input.is_some() && self.options.drcov.is_some() { if self.options.rerun_input.is_some() && self.options.drcov.is_some() {

View File

@ -10,7 +10,7 @@ use libafl::events::SimpleEventManager;
#[cfg(not(feature = "simplemgr"))] #[cfg(not(feature = "simplemgr"))]
use libafl::events::{EventConfig, Launcher, MonitorTypedEventManager}; use libafl::events::{EventConfig, Launcher, MonitorTypedEventManager};
use libafl::{ use libafl::{
events::{LlmpEventManager, LlmpRestartingEventManager}, events::{ClientDescription, LlmpEventManager, LlmpRestartingEventManager},
monitors::{tui::TuiMonitor, Monitor, MultiMonitor}, monitors::{tui::TuiMonitor, Monitor, MultiMonitor},
Error, Error,
}; };
@ -124,7 +124,7 @@ impl Fuzzer {
.unwrap(), .unwrap(),
StateRestorer::new(shmem_provider.new_shmem(0x1000).unwrap()), StateRestorer::new(shmem_provider.new_shmem(0x1000).unwrap()),
)), )),
CoreId(0), ClientDescription::new(0, 0, CoreId(0)),
); );
} }

View File

@ -7,7 +7,7 @@ use libafl::events::SimpleEventManager;
use libafl::events::{LlmpRestartingEventManager, MonitorTypedEventManager}; use libafl::events::{LlmpRestartingEventManager, MonitorTypedEventManager};
use libafl::{ use libafl::{
corpus::{Corpus, InMemoryOnDiskCorpus, OnDiskCorpus}, corpus::{Corpus, InMemoryOnDiskCorpus, OnDiskCorpus},
events::{EventRestarter, NopEventManager}, events::{ClientDescription, EventRestarter, NopEventManager},
executors::{Executor, ShadowExecutor}, executors::{Executor, ShadowExecutor},
feedback_or, feedback_or_fast, feedback_or, feedback_or_fast,
feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback}, feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -32,7 +32,6 @@ use libafl::{
#[cfg(not(feature = "simplemgr"))] #[cfg(not(feature = "simplemgr"))]
use libafl_bolts::shmem::StdShMemProvider; use libafl_bolts::shmem::StdShMemProvider;
use libafl_bolts::{ use libafl_bolts::{
core_affinity::CoreId,
ownedref::OwnedMutSlice, ownedref::OwnedMutSlice,
rands::StdRand, rands::StdRand,
tuples::{tuple_list, Merge, Prepend}, tuples::{tuple_list, Merge, Prepend},
@ -66,7 +65,7 @@ pub struct Instance<'a, M: Monitor> {
harness: Option<Harness>, harness: Option<Harness>,
qemu: Qemu, qemu: Qemu,
mgr: ClientMgr<M>, mgr: ClientMgr<M>,
core_id: CoreId, client_description: ClientDescription,
#[builder(default)] #[builder(default)]
extra_tokens: Vec<String>, extra_tokens: Vec<String>,
#[builder(default=PhantomData)] #[builder(default=PhantomData)]
@ -161,10 +160,12 @@ impl<M: Monitor> Instance<'_, M> {
// RNG // RNG
StdRand::new(), StdRand::new(),
// Corpus that will be evolved, we keep it in memory for performance // Corpus that will be evolved, we keep it in memory for performance
InMemoryOnDiskCorpus::no_meta(self.options.queue_dir(self.core_id))?, InMemoryOnDiskCorpus::no_meta(
self.options.queue_dir(self.client_description.clone()),
)?,
// Corpus in which we store solutions (crashes in this example), // Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer // on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new(self.options.crashes_dir(self.core_id))?, OnDiskCorpus::new(self.options.crashes_dir(self.client_description.clone()))?,
// States of the feedbacks. // States of the feedbacks.
// The feedbacks can report the data that should persist in the State. // The feedbacks can report the data that should persist in the State.
&mut feedback, &mut feedback,
@ -238,7 +239,10 @@ impl<M: Monitor> Instance<'_, M> {
process::exit(0); process::exit(0);
} }
if self.options.is_cmplog_core(self.core_id) { if self
.options
.is_cmplog_core(self.client_description.core_id())
{
// Create a QEMU in-process executor // Create a QEMU in-process executor
let executor = QemuExecutor::new( let executor = QemuExecutor::new(
emulator, emulator,

View File

@ -2,7 +2,7 @@ use core::time::Duration;
use std::{env, ops::Range, path::PathBuf}; use std::{env, ops::Range, path::PathBuf};
use clap::{error::ErrorKind, CommandFactory, Parser}; use clap::{error::ErrorKind, CommandFactory, Parser};
use libafl::Error; use libafl::{events::ClientDescription, Error};
use libafl_bolts::core_affinity::{CoreId, Cores}; use libafl_bolts::core_affinity::{CoreId, Cores};
use libafl_qemu::GuestAddr; use libafl_qemu::GuestAddr;
@ -144,20 +144,20 @@ impl FuzzerOptions {
PathBuf::from(&self.input) PathBuf::from(&self.input)
} }
pub fn output_dir(&self, core_id: CoreId) -> PathBuf { pub fn output_dir(&self, client_description: ClientDescription) -> PathBuf {
let mut dir = PathBuf::from(&self.output); let mut dir = PathBuf::from(&self.output);
dir.push(format!("cpu_{:03}", core_id.0)); dir.push(format!("client_{:03}", client_description.id()));
dir dir
} }
pub fn queue_dir(&self, core_id: CoreId) -> PathBuf { pub fn queue_dir(&self, client_description: ClientDescription) -> PathBuf {
let mut dir = self.output_dir(core_id).clone(); let mut dir = self.output_dir(client_description).clone();
dir.push("queue"); dir.push("queue");
dir dir
} }
pub fn crashes_dir(&self, core_id: CoreId) -> PathBuf { pub fn crashes_dir(&self, client_description: ClientDescription) -> PathBuf {
let mut dir = self.output_dir(core_id).clone(); let mut dir = self.output_dir(client_description).clone();
dir.push("crashes"); dir.push("crashes");
dir dir
} }

View File

@ -71,8 +71,6 @@ mod feedback;
mod scheduler; mod scheduler;
mod stages; mod stages;
use clap::Parser; use clap::Parser;
#[cfg(not(feature = "fuzzbench"))]
use corpus::remove_main_node_file;
use corpus::{check_autoresume, create_dir_if_not_exists}; use corpus::{check_autoresume, create_dir_if_not_exists};
mod corpus; mod corpus;
mod executor; mod executor;
@ -80,19 +78,23 @@ mod fuzzer;
mod hooks; mod hooks;
use env_parser::parse_envs; use env_parser::parse_envs;
use fuzzer::run_client; use fuzzer::run_client;
#[cfg(feature = "fuzzbench")]
use libafl::events::SimpleEventManager;
#[cfg(not(feature = "fuzzbench"))]
use libafl::events::{CentralizedLauncher, EventConfig};
#[cfg(not(feature = "fuzzbench"))]
use libafl::monitors::MultiMonitor;
#[cfg(feature = "fuzzbench")]
use libafl::monitors::SimpleMonitor;
use libafl::{schedulers::powersched::BaseSchedule, Error}; use libafl::{schedulers::powersched::BaseSchedule, Error};
use libafl_bolts::core_affinity::{CoreId, Cores}; use libafl_bolts::core_affinity::Cores;
#[cfg(not(feature = "fuzzbench"))]
use libafl_bolts::shmem::{ShMemProvider, StdShMemProvider};
use nix::sys::signal::Signal; use nix::sys::signal::Signal;
#[cfg(not(feature = "fuzzbench"))]
use {
corpus::remove_main_node_file,
libafl::{
events::{CentralizedLauncher, ClientDescription, EventConfig},
monitors::MultiMonitor,
},
libafl_bolts::shmem::{ShMemProvider, StdShMemProvider},
};
#[cfg(feature = "fuzzbench")]
use {
libafl::{events::SimpleEventManager, monitors::SimpleMonitor},
libafl_bolts::core_affinity::CoreId,
};
const AFL_DEFAULT_INPUT_LEN_MAX: usize = 1_048_576; const AFL_DEFAULT_INPUT_LEN_MAX: usize = 1_048_576;
const AFL_DEFAULT_INPUT_LEN_MIN: usize = 1; const AFL_DEFAULT_INPUT_LEN_MIN: usize = 1;
@ -139,22 +141,48 @@ fn main() {
.shmem_provider(shmem_provider) .shmem_provider(shmem_provider)
.configuration(EventConfig::from_name("default")) .configuration(EventConfig::from_name("default"))
.monitor(monitor) .monitor(monitor)
.main_run_client(|state: Option<_>, mgr: _, core_id: CoreId| { .main_run_client(
println!("run primary client on core {}", core_id.0); |state: Option<_>, mgr: _, client_description: ClientDescription| {
let fuzzer_dir = opt.output_dir.join("fuzzer_main"); println!(
let _ = check_autoresume(&fuzzer_dir, opt.auto_resume).unwrap(); "run primary client with id {} on core {}",
let res = run_client(state, mgr, &fuzzer_dir, core_id, &opt, true); client_description.id(),
let _ = remove_main_node_file(&fuzzer_dir); client_description.core_id().0
res );
}) let fuzzer_dir = opt.output_dir.join("fuzzer_main");
.secondary_run_client(|state: Option<_>, mgr: _, core_id: CoreId| { let _ = check_autoresume(&fuzzer_dir, opt.auto_resume).unwrap();
println!("run secondary client on core {}", core_id.0); let res = run_client(
let fuzzer_dir = opt state,
.output_dir mgr,
.join(format!("fuzzer_secondary_{}", core_id.0)); &fuzzer_dir,
let _ = check_autoresume(&fuzzer_dir, opt.auto_resume).unwrap(); client_description.core_id(),
run_client(state, mgr, &fuzzer_dir, core_id, &opt, false) &opt,
}) true,
);
let _ = remove_main_node_file(&fuzzer_dir);
res
},
)
.secondary_run_client(
|state: Option<_>, mgr: _, client_description: ClientDescription| {
println!(
"run secondary client with id {} on core {}",
client_description.id(),
client_description.core_id().0
);
let fuzzer_dir = opt
.output_dir
.join(format!("fuzzer_secondary_{}", client_description.id()));
let _ = check_autoresume(&fuzzer_dir, opt.auto_resume).unwrap();
run_client(
state,
mgr,
&fuzzer_dir,
client_description.core_id(),
&opt,
false,
)
},
)
.cores(&opt.cores.clone().expect("invariant; should never occur")) .cores(&opt.cores.clone().expect("invariant; should never occur"))
.broker_port(opt.broker_port.unwrap_or(AFL_DEFAULT_BROKER_PORT)) .broker_port(opt.broker_port.unwrap_or(AFL_DEFAULT_BROKER_PORT))
.build() .build()

View File

@ -2,7 +2,7 @@ use std::path::{Path, PathBuf};
use libafl::{ use libafl::{
corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus, Testcase}, corpus::{CachedOnDiskCorpus, Corpus, OnDiskCorpus, Testcase},
events::{launcher::Launcher, EventConfig}, events::{launcher::Launcher, ClientDescription, EventConfig},
feedbacks::{CrashFeedback, MaxMapFeedback}, feedbacks::{CrashFeedback, MaxMapFeedback},
inputs::BytesInput, inputs::BytesInput,
monitors::MultiMonitor, monitors::MultiMonitor,
@ -14,7 +14,7 @@ use libafl::{
Error, Fuzzer, StdFuzzer, Error, Fuzzer, StdFuzzer,
}; };
use libafl_bolts::{ use libafl_bolts::{
core_affinity::{CoreId, Cores}, core_affinity::Cores,
rands::StdRand, rands::StdRand,
shmem::{ShMemProvider, StdShMemProvider}, shmem::{ShMemProvider, StdShMemProvider},
tuples::tuple_list, tuples::tuple_list,
@ -31,10 +31,12 @@ fn main() {
let parent_cpu_id = cores.ids.first().expect("unable to get first core id"); let parent_cpu_id = cores.ids.first().expect("unable to get first core id");
// region: fuzzer start function // region: fuzzer start function
let mut run_client = |state: Option<_>, mut restarting_mgr, core_id: CoreId| { let mut run_client = |state: Option<_>,
mut restarting_mgr,
client_description: ClientDescription| {
// nyx stuff // nyx stuff
let settings = NyxSettings::builder() let settings = NyxSettings::builder()
.cpu_id(core_id.0) .cpu_id(client_description.core_id().0)
.parent_cpu_id(Some(parent_cpu_id.0)) .parent_cpu_id(Some(parent_cpu_id.0))
.build(); .build();
let helper = NyxHelper::new("/tmp/nyx_libxml2/", settings).unwrap(); let helper = NyxHelper::new("/tmp/nyx_libxml2/", settings).unwrap();

View File

@ -83,7 +83,7 @@ pub fn fuzz() {
.expect("Symbol or env BREAKPOINT not found"); .expect("Symbol or env BREAKPOINT not found");
println!("Breakpoint address = {breakpoint:#x}"); println!("Breakpoint address = {breakpoint:#x}");
let mut run_client = |state: Option<_>, mut mgr, _core_id| { let mut run_client = |state: Option<_>, mut mgr, _client_description| {
let args: Vec<String> = env::args().collect(); let args: Vec<String> = env::args().collect();
// The wrapped harness function, calling out to the LLVM-style harness // The wrapped harness function, calling out to the LLVM-style harness

View File

@ -80,7 +80,7 @@ pub fn fuzz() {
.expect("Symbol or env BREAKPOINT not found"); .expect("Symbol or env BREAKPOINT not found");
println!("Breakpoint address = {breakpoint:#x}"); println!("Breakpoint address = {breakpoint:#x}");
let mut run_client = |state: Option<_>, mut mgr, _core_id| { let mut run_client = |state: Option<_>, mut mgr, _client_description| {
let target_dir = env::var("TARGET_DIR").expect("TARGET_DIR env not set"); let target_dir = env::var("TARGET_DIR").expect("TARGET_DIR env not set");
// Create an observation channel using the coverage map // Create an observation channel using the coverage map

View File

@ -43,7 +43,7 @@ pub fn fuzz() {
let corpus_dirs = [PathBuf::from("./corpus")]; let corpus_dirs = [PathBuf::from("./corpus")];
let objective_dir = PathBuf::from("./crashes"); let objective_dir = PathBuf::from("./crashes");
let mut run_client = |state: Option<_>, mut mgr, _core_id| { let mut run_client = |state: Option<_>, mut mgr, _client_description| {
// Initialize QEMU // Initialize QEMU
let args: Vec<String> = env::args().collect(); let args: Vec<String> = env::args().collect();

View File

@ -57,7 +57,7 @@ pub fn fuzz() {
let corpus_dirs = [PathBuf::from("./corpus")]; let corpus_dirs = [PathBuf::from("./corpus")];
let objective_dir = PathBuf::from("./crashes"); let objective_dir = PathBuf::from("./crashes");
let mut run_client = |state: Option<_>, mut mgr, _core_id| { let mut run_client = |state: Option<_>, mut mgr, _client_description| {
// Initialize QEMU // Initialize QEMU
let args: Vec<String> = env::args().collect(); let args: Vec<String> = env::args().collect();

View File

@ -47,7 +47,7 @@ pub fn fuzz() {
let corpus_dirs = [PathBuf::from("./corpus")]; let corpus_dirs = [PathBuf::from("./corpus")];
let objective_dir = PathBuf::from("./crashes"); let objective_dir = PathBuf::from("./crashes");
let mut run_client = |state: Option<_>, mut mgr, _core_id| { let mut run_client = |state: Option<_>, mut mgr, _client_description| {
// Initialize QEMU // Initialize QEMU
let args: Vec<String> = env::args().collect(); let args: Vec<String> = env::args().collect();

View File

@ -130,7 +130,7 @@ pub extern "C" fn LLVMFuzzerRunDriver(
// TODO: we need to handle Atheris calls to `exit` on errors somhow. // TODO: we need to handle Atheris calls to `exit` on errors somhow.
let mut run_client = |state: Option<_>, mut mgr, _core_id| { let mut run_client = |state: Option<_>, mut mgr, _client_description| {
// Create an observation channel using the coverage map // Create an observation channel using the coverage map
let edges = unsafe { extra_counters() }; let edges = unsafe { extra_counters() };
println!("edges: {:?}", edges); println!("edges: {:?}", edges);

View File

@ -137,7 +137,7 @@ pub extern "C" fn libafl_main() {
let monitor = MultiMonitor::new(|s| println!("{s}")); let monitor = MultiMonitor::new(|s| println!("{s}"));
let mut run_client = |state: Option<_>, mut restarting_mgr, _core_id| { let mut run_client = |state: Option<_>, mut restarting_mgr, _client_description| {
// Create an observation channel using the coverage map // Create an observation channel using the coverage map
let edges_observer = HitcountsMapObserver::new(unsafe { let edges_observer = HitcountsMapObserver::new(unsafe {
StdMapObserver::from_mut_ptr("edges", EDGES_MAP.as_mut_ptr(), MAX_EDGES_FOUND) StdMapObserver::from_mut_ptr("edges", EDGES_MAP.as_mut_ptr(), MAX_EDGES_FOUND)

View File

@ -8,7 +8,10 @@ use std::{env, net::SocketAddr, path::PathBuf};
use clap::{self, Parser}; use clap::{self, Parser};
use libafl::{ use libafl::{
corpus::{Corpus, InMemoryCorpus, OnDiskCorpus}, corpus::{Corpus, InMemoryCorpus, OnDiskCorpus},
events::{centralized::CentralizedEventManager, launcher::CentralizedLauncher, EventConfig}, events::{
centralized::CentralizedEventManager, launcher::CentralizedLauncher, ClientDescription,
EventConfig,
},
executors::{inprocess::InProcessExecutor, ExitKind}, executors::{inprocess::InProcessExecutor, ExitKind},
feedback_or, feedback_or_fast, feedback_or, feedback_or_fast,
feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback}, feedbacks::{CrashFeedback, MaxMapFeedback, TimeFeedback, TimeoutFeedback},
@ -27,7 +30,7 @@ use libafl::{
Error, HasMetadata, Error, HasMetadata,
}; };
use libafl_bolts::{ use libafl_bolts::{
core_affinity::{CoreId, Cores}, core_affinity::Cores,
rands::StdRand, rands::StdRand,
shmem::{ShMemProvider, StdShMemProvider}, shmem::{ShMemProvider, StdShMemProvider},
tuples::{tuple_list, Merge}, tuples::{tuple_list, Merge},
@ -136,125 +139,129 @@ pub extern "C" fn libafl_main() {
let monitor = MultiMonitor::new(|s| println!("{s}")); let monitor = MultiMonitor::new(|s| println!("{s}"));
let mut secondary_run_client = |state: Option<_>, let mut secondary_run_client =
mut mgr: CentralizedEventManager<_, _, _, _>, |state: Option<_>,
_core_id: CoreId| { mut mgr: CentralizedEventManager<_, _, _, _>,
// Create an observation channel using the coverage map _client_description: ClientDescription| {
let edges_observer = // Create an observation channel using the coverage map
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices(); let edges_observer =
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") })
.track_indices();
// Create an observation channel to keep track of the execution time // Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time"); let time_observer = TimeObserver::new("time");
// Feedback to rate the interestingness of an input // Feedback to rate the interestingness of an input
// This one is composed by two Feedbacks in OR // This one is composed by two Feedbacks in OR
let mut feedback = feedback_or!( let mut feedback = feedback_or!(
// New maximization map feedback linked to the edges observer and the feedback state // New maximization map feedback linked to the edges observer and the feedback state
MaxMapFeedback::new(&edges_observer), MaxMapFeedback::new(&edges_observer),
// Time feedback, this one does not need a feedback state // Time feedback, this one does not need a feedback state
TimeFeedback::new(&time_observer) TimeFeedback::new(&time_observer)
); );
// A feedback to choose if an input is a solution or not // A feedback to choose if an input is a solution or not
let mut objective = feedback_or_fast!(CrashFeedback::new(), TimeoutFeedback::new()); let mut objective = feedback_or_fast!(CrashFeedback::new(), TimeoutFeedback::new());
// If not restarting, create a State from scratch // If not restarting, create a State from scratch
let mut state = state.unwrap_or_else(|| { let mut state = state.unwrap_or_else(|| {
StdState::new( StdState::new(
// RNG // RNG
StdRand::new(), StdRand::new(),
// Corpus that will be evolved, we keep it in memory for performance // Corpus that will be evolved, we keep it in memory for performance
InMemoryCorpus::new(), InMemoryCorpus::new(),
// Corpus in which we store solutions (crashes in this example), // Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer // on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new(&opt.output).unwrap(), OnDiskCorpus::new(&opt.output).unwrap(),
// States of the feedbacks. // States of the feedbacks.
// The feedbacks can report the data that should persist in the State. // The feedbacks can report the data that should persist in the State.
&mut feedback, &mut feedback,
// Same for objective feedbacks // Same for objective feedbacks
&mut objective, &mut objective,
) )
.unwrap() .unwrap()
}); });
println!("We're a client, let's fuzz :)"); println!("We're a client, let's fuzz :)");
// Create a PNG dictionary if not existing // Create a PNG dictionary if not existing
if state.metadata_map().get::<Tokens>().is_none() { if state.metadata_map().get::<Tokens>().is_none() {
state.add_metadata(Tokens::from([ state.add_metadata(Tokens::from([
vec![137, 80, 78, 71, 13, 10, 26, 10], // PNG header vec![137, 80, 78, 71, 13, 10, 26, 10], // PNG header
"IHDR".as_bytes().to_vec(), "IHDR".as_bytes().to_vec(),
"IDAT".as_bytes().to_vec(), "IDAT".as_bytes().to_vec(),
"PLTE".as_bytes().to_vec(), "PLTE".as_bytes().to_vec(),
"IEND".as_bytes().to_vec(), "IEND".as_bytes().to_vec(),
])); ]));
}
// Setup a basic mutator with a mutational stage
let mutator = StdScheduledMutator::new(havoc_mutations().merge(tokens_mutations()));
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
// A minimization+queue policy to get testcasess from the corpus
let scheduler =
IndexesLenTimeMinimizerScheduler::new(&edges_observer, QueueScheduler::new());
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
// The wrapped harness function, calling out to the LLVM-style harness
let mut harness = |input: &BytesInput| {
let target = input.target_bytes();
let buf = target.as_slice();
unsafe {
libfuzzer_test_one_input(buf);
} }
ExitKind::Ok
// Setup a basic mutator with a mutational stage
let mutator = StdScheduledMutator::new(havoc_mutations().merge(tokens_mutations()));
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
// A minimization+queue policy to get testcasess from the corpus
let scheduler =
IndexesLenTimeMinimizerScheduler::new(&edges_observer, QueueScheduler::new());
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
// The wrapped harness function, calling out to the LLVM-style harness
let mut harness = |input: &BytesInput| {
let target = input.target_bytes();
let buf = target.as_slice();
unsafe {
libfuzzer_test_one_input(buf);
}
ExitKind::Ok
};
// Create the executor for an in-process function with one observer for edge coverage and one for the execution time
#[cfg(target_os = "linux")]
let mut executor = InProcessExecutor::batched_timeout(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut mgr,
opt.timeout,
)?;
#[cfg(not(target_os = "linux"))]
let mut executor = InProcessExecutor::with_timeout(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut mgr,
opt.timeout,
)?;
// The actual target run starts here.
// Call LLVMFUzzerInitialize() if present.
let args: Vec<String> = env::args().collect();
if unsafe { libfuzzer_initialize(&args) } == -1 {
println!("Warning: LLVMFuzzerInitialize failed with -1");
}
// In case the corpus is empty (on first run), reset
if state.must_load_initial_inputs() {
state
.load_initial_inputs(&mut fuzzer, &mut executor, &mut mgr, &opt.input)
.unwrap_or_else(|_| {
panic!("Failed to load initial corpus at {:?}", &opt.input)
});
println!("We imported {} inputs from disk.", state.corpus().count());
}
if !mgr.is_main() {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
} else {
let mut empty_stages = tuple_list!();
fuzzer.fuzz_loop(&mut empty_stages, &mut executor, &mut state, &mut mgr)?;
}
Ok(())
}; };
// Create the executor for an in-process function with one observer for edge coverage and one for the execution time
#[cfg(target_os = "linux")]
let mut executor = InProcessExecutor::batched_timeout(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut mgr,
opt.timeout,
)?;
#[cfg(not(target_os = "linux"))]
let mut executor = InProcessExecutor::with_timeout(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut mgr,
opt.timeout,
)?;
// The actual target run starts here.
// Call LLVMFUzzerInitialize() if present.
let args: Vec<String> = env::args().collect();
if unsafe { libfuzzer_initialize(&args) } == -1 {
println!("Warning: LLVMFuzzerInitialize failed with -1");
}
// In case the corpus is empty (on first run), reset
if state.must_load_initial_inputs() {
state
.load_initial_inputs(&mut fuzzer, &mut executor, &mut mgr, &opt.input)
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &opt.input));
println!("We imported {} inputs from disk.", state.corpus().count());
}
if !mgr.is_main() {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
} else {
let mut empty_stages = tuple_list!();
fuzzer.fuzz_loop(&mut empty_stages, &mut executor, &mut state, &mut mgr)?;
}
Ok(())
};
let mut main_run_client = secondary_run_client.clone(); // clone it just for borrow checker let mut main_run_client = secondary_run_client.clone(); // clone it just for borrow checker
match CentralizedLauncher::builder() match CentralizedLauncher::builder()

View File

@ -112,7 +112,7 @@ windows_alias = "unsupported"
script_runner = "@shell" script_runner = "@shell"
script = ''' script = '''
rm -rf libafl_unix_shmem_server || true rm -rf libafl_unix_shmem_server || true
timeout 31s ./${FUZZER_NAME}.coverage --broker-port 21337 --cores 0 --input ./corpus 2>/dev/null | tee fuzz_stdout.log || true timeout 31s ./${FUZZER_NAME}.coverage --broker-port 21337 --cores 0 --input ./corpus | tee fuzz_stdout.log || true
if grep -qa "corpus: 30" fuzz_stdout.log; then if grep -qa "corpus: 30" fuzz_stdout.log; then
echo "Fuzzer is working" echo "Fuzzer is working"
else else

View File

@ -56,11 +56,19 @@ struct Opt {
short, short,
long, long,
value_parser = Cores::from_cmdline, value_parser = Cores::from_cmdline,
help = "Spawn a client in each of the provided cores. Broker runs in the 0th core. 'all' to select all available cores. 'none' to run a client without binding to any core. eg: '1,2-4,6' selects the cores 1,2,3,4,6.", help = "Spawn clients in each of the provided cores. Broker runs in the 0th core. 'all' to select all available cores. 'none' to run a client without binding to any core. eg: '1,2-4,6' selects the cores 1,2,3,4,6.",
name = "CORES" name = "CORES"
)] )]
cores: Cores, cores: Cores,
#[arg(
long,
help = "Spawn n clients on each core, this is useful if clients don't fully load a client, e.g. because they `sleep` often.",
name = "OVERCOMMIT",
default_value = "1"
)]
overcommit: usize,
#[arg( #[arg(
short = 'p', short = 'p',
long, long,
@ -137,7 +145,7 @@ pub extern "C" fn libafl_main() {
MultiMonitor::new(|s| println!("{s}")), MultiMonitor::new(|s| println!("{s}")),
); );
let mut run_client = |state: Option<_>, mut restarting_mgr, _core_id| { let mut run_client = |state: Option<_>, mut restarting_mgr, _client_description| {
// Create an observation channel using the coverage map // Create an observation channel using the coverage map
let edges_observer = let edges_observer =
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices(); HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices();
@ -256,6 +264,7 @@ pub extern "C" fn libafl_main() {
.monitor(monitor) .monitor(monitor)
.run_client(&mut run_client) .run_client(&mut run_client)
.cores(&cores) .cores(&cores)
.overcommit(opt.overcommit)
.broker_port(broker_port) .broker_port(broker_port)
.remote_broker_addr(opt.remote_broker_addr) .remote_broker_addr(opt.remote_broker_addr)
.stdout_file(Some("/dev/null")) .stdout_file(Some("/dev/null"))

View File

@ -10,8 +10,9 @@ use clap::Parser;
use libafl::{ use libafl::{
corpus::{Corpus, InMemoryOnDiskCorpus, OnDiskCorpus}, corpus::{Corpus, InMemoryOnDiskCorpus, OnDiskCorpus},
events::{ events::{
launcher::Launcher, llmp::LlmpShouldSaveState, EventConfig, EventRestarter, launcher::{ClientDescription, Launcher},
LlmpRestartingEventManager, llmp::LlmpShouldSaveState,
EventConfig, EventRestarter, LlmpRestartingEventManager,
}, },
executors::{inprocess::InProcessExecutor, ExitKind}, executors::{inprocess::InProcessExecutor, ExitKind},
feedback_or, feedback_or_fast, feedback_or, feedback_or_fast,
@ -162,7 +163,7 @@ pub extern "C" fn libafl_main() {
let mut run_client = |state: Option<_>, let mut run_client = |state: Option<_>,
mut restarting_mgr: LlmpRestartingEventManager<_, _, _>, mut restarting_mgr: LlmpRestartingEventManager<_, _, _>,
core_id| { client_description: ClientDescription| {
// Create an observation channel using the coverage map // Create an observation channel using the coverage map
let edges_observer = let edges_observer =
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices(); HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices();
@ -259,7 +260,7 @@ pub extern "C" fn libafl_main() {
&mut executor, &mut executor,
&mut restarting_mgr, &mut restarting_mgr,
&opt.input, &opt.input,
&core_id, &client_description.core_id(),
&cores, &cores,
) )
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &opt.input)); .unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &opt.input));


@ -10,7 +10,7 @@ use libafl::{
corpus::{Corpus, InMemoryCorpus, OnDiskCorpus}, corpus::{Corpus, InMemoryCorpus, OnDiskCorpus},
events::{ events::{
centralized::CentralizedEventManager, launcher::CentralizedLauncher, centralized::CentralizedEventManager, launcher::CentralizedLauncher,
multi_machine::NodeDescriptor, EventConfig, multi_machine::NodeDescriptor, ClientDescription, EventConfig,
}, },
executors::{inprocess::InProcessExecutor, ExitKind}, executors::{inprocess::InProcessExecutor, ExitKind},
feedback_or, feedback_or_fast, feedback_or, feedback_or_fast,
@ -30,7 +30,7 @@ use libafl::{
Error, HasMetadata, Error, HasMetadata,
}; };
use libafl_bolts::{ use libafl_bolts::{
core_affinity::{CoreId, Cores}, core_affinity::Cores,
rands::StdRand, rands::StdRand,
shmem::{ShMemProvider, StdShMemProvider}, shmem::{ShMemProvider, StdShMemProvider},
tuples::{tuple_list, Merge}, tuples::{tuple_list, Merge},
@ -155,132 +155,134 @@ pub extern "C" fn libafl_main() {
let monitor = MultiMonitor::new(|s| println!("{s}")); let monitor = MultiMonitor::new(|s| println!("{s}"));
let mut secondary_run_client = |state: Option<_>, let mut secondary_run_client =
mut mgr: CentralizedEventManager<_, _, _, _>, |state: Option<_>,
_core_id: CoreId| { mut mgr: CentralizedEventManager<_, _, _, _>,
// Create an observation channel using the coverage map _client_description: ClientDescription| {
let edges_observer = // Create an observation channel using the coverage map
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") }).track_indices(); let edges_observer =
HitcountsMapObserver::new(unsafe { std_edges_map_observer("edges") })
.track_indices();
// Create an observation channel to keep track of the execution time // Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time"); let time_observer = TimeObserver::new("time");
// Feedback to rate the interestingness of an input // Feedback to rate the interestingness of an input
// This one is composed by two Feedbacks in OR // This one is composed by two Feedbacks in OR
let mut feedback = feedback_or!( let mut feedback = feedback_or!(
// New maximization map feedback linked to the edges observer and the feedback state // New maximization map feedback linked to the edges observer and the feedback state
MaxMapFeedback::new(&edges_observer), MaxMapFeedback::new(&edges_observer),
// Time feedback, this one does not need a feedback state // Time feedback, this one does not need a feedback state
TimeFeedback::new(&time_observer) TimeFeedback::new(&time_observer)
); );
// A feedback to choose if an input is a solution or not // A feedback to choose if an input is a solution or not
let mut objective = feedback_or_fast!(CrashFeedback::new(), TimeoutFeedback::new()); let mut objective = feedback_or_fast!(CrashFeedback::new(), TimeoutFeedback::new());
// If not restarting, create a State from scratch // If not restarting, create a State from scratch
let mut state = state.unwrap_or_else(|| { let mut state = state.unwrap_or_else(|| {
StdState::new( StdState::new(
// RNG // RNG
StdRand::new(), StdRand::new(),
// Corpus that will be evolved, we keep it in memory for performance // Corpus that will be evolved, we keep it in memory for performance
InMemoryCorpus::new(), InMemoryCorpus::new(),
// Corpus in which we store solutions (crashes in this example), // Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer // on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new(&opt.output).unwrap(), OnDiskCorpus::new(&opt.output).unwrap(),
// States of the feedbacks. // States of the feedbacks.
// The feedbacks can report the data that should persist in the State. // The feedbacks can report the data that should persist in the State.
&mut feedback, &mut feedback,
// Same for objective feedbacks // Same for objective feedbacks
&mut objective, &mut objective,
) )
.unwrap() .unwrap()
}); });
println!("We're a client, let's fuzz :)"); println!("We're a client, let's fuzz :)");
// Create a PNG dictionary if not existing // Create a PNG dictionary if not existing
if state.metadata_map().get::<Tokens>().is_none() { if state.metadata_map().get::<Tokens>().is_none() {
state.add_metadata(Tokens::from([ state.add_metadata(Tokens::from([
vec![137, 80, 78, 71, 13, 10, 26, 10], // PNG header vec![137, 80, 78, 71, 13, 10, 26, 10], // PNG header
"IHDR".as_bytes().to_vec(), "IHDR".as_bytes().to_vec(),
"IDAT".as_bytes().to_vec(), "IDAT".as_bytes().to_vec(),
"PLTE".as_bytes().to_vec(), "PLTE".as_bytes().to_vec(),
"IEND".as_bytes().to_vec(), "IEND".as_bytes().to_vec(),
])); ]));
}
// Setup a basic mutator with a mutational stage
let mutator = StdScheduledMutator::new(havoc_mutations().merge(tokens_mutations()));
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
// A minimization+queue policy to get testcases from the corpus
let scheduler =
IndexesLenTimeMinimizerScheduler::new(&edges_observer, QueueScheduler::new());
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
// The wrapped harness function, calling out to the LLVM-style harness
let mut harness = |input: &BytesInput| {
let target = input.target_bytes();
let buf = target.as_slice();
unsafe {
libfuzzer_test_one_input(buf);
} }
ExitKind::Ok
// Setup a basic mutator with a mutational stage
let mutator = StdScheduledMutator::new(havoc_mutations().merge(tokens_mutations()));
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
// A minimization+queue policy to get testcases from the corpus
let scheduler =
IndexesLenTimeMinimizerScheduler::new(&edges_observer, QueueScheduler::new());
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
// The wrapped harness function, calling out to the LLVM-style harness
let mut harness = |input: &BytesInput| {
let target = input.target_bytes();
let buf = target.as_slice();
unsafe {
libfuzzer_test_one_input(buf);
}
ExitKind::Ok
};
// Create the executor for an in-process function with one observer for edge coverage and one for the execution time
#[cfg(target_os = "linux")]
let mut executor = InProcessExecutor::batched_timeout(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut mgr,
opt.timeout,
)?;
#[cfg(not(target_os = "linux"))]
let mut executor = InProcessExecutor::with_timeout(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut mgr,
opt.timeout,
)?;
// The actual target run starts here.
// Call LLVMFuzzerInitialize() if present.
let args: Vec<String> = env::args().collect();
if unsafe { libfuzzer_initialize(&args) } == -1 {
println!("Warning: LLVMFuzzerInitialize failed with -1");
}
// In case the corpus is empty (on first run), reset
if state.must_load_initial_inputs() {
state
.load_initial_inputs(&mut fuzzer, &mut executor, &mut mgr, &opt.input)
.unwrap_or_else(|_| {
panic!("Failed to load initial corpus at {:?}", &opt.input)
});
println!("We imported {} inputs from disk.", state.corpus().count());
}
if !mgr.is_main() {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
} else {
let mut empty_stages = tuple_list!();
fuzzer.fuzz_loop(&mut empty_stages, &mut executor, &mut state, &mut mgr)?;
}
Ok(())
}; };
// Create the executor for an in-process function with one observer for edge coverage and one for the execution time let mut main_run_client = secondary_run_client; // clone it just for borrow checker
#[cfg(target_os = "linux")]
let mut executor = InProcessExecutor::batched_timeout(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut mgr,
opt.timeout,
)?;
#[cfg(not(target_os = "linux"))] let parent_addr: Option<SocketAddr> = opt
let mut executor = InProcessExecutor::with_timeout( .parent_addr
&mut harness, .map(|parent_str| SocketAddr::from_str(parent_str.as_str()).expect("Wrong parent address"));
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut mgr,
opt.timeout,
)?;
// The actual target run starts here.
// Call LLVMFuzzerInitialize() if present.
let args: Vec<String> = env::args().collect();
if unsafe { libfuzzer_initialize(&args) } == -1 {
println!("Warning: LLVMFuzzerInitialize failed with -1");
}
// In case the corpus is empty (on first run), reset
if state.must_load_initial_inputs() {
state
.load_initial_inputs(&mut fuzzer, &mut executor, &mut mgr, &opt.input)
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &opt.input));
println!("We imported {} inputs from disk.", state.corpus().count());
}
if !mgr.is_main() {
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)?;
} else {
let mut empty_stages = tuple_list!();
fuzzer.fuzz_loop(&mut empty_stages, &mut executor, &mut state, &mut mgr)?;
}
Ok(())
};
let mut main_run_client = secondary_run_client.clone(); // clone it just for borrow checker
let parent_addr: Option<SocketAddr> = if let Some(parent_str) = opt.parent_addr {
Some(SocketAddr::from_str(parent_str.as_str()).expect("Wrong parent address"))
} else {
None
};
let mut node_description = NodeDescriptor::builder().parent_addr(parent_addr).build(); let mut node_description = NodeDescriptor::builder().parent_addr(parent_addr).build();


@ -134,7 +134,7 @@ pub extern "C" fn libafl_main() {
// to disconnect the event converter from the broker later // to disconnect the event converter from the broker later
// call detach_from_broker(port) // call detach_from_broker(port)
let mut run_client = |state: Option<_>, mut mgr, _core_id| { let mut run_client = |state: Option<_>, mut mgr, _client_description| {
let mut bytes = vec![]; let mut bytes = vec![];
// The closure that we want to fuzz // The closure that we want to fuzz


@ -12,66 +12,52 @@
//! On `Unix` systems, the [`Launcher`] will use `fork` if the `fork` feature is used for `LibAFL`. //! On `Unix` systems, the [`Launcher`] will use `fork` if the `fork` feature is used for `LibAFL`.
//! Else, it will start subsequent nodes with the same commandline, and will set special `env` variables accordingly. //! Else, it will start subsequent nodes with the same commandline, and will set special `env` variables accordingly.
use alloc::string::ToString;
#[cfg(feature = "std")]
use core::time::Duration;
use core::{ use core::{
fmt::{self, Debug, Formatter}, fmt::{self, Debug, Formatter},
num::NonZeroUsize, num::NonZeroUsize,
time::Duration,
}; };
#[cfg(all(unix, feature = "std", feature = "fork"))] use std::{net::SocketAddr, string::String};
use std::boxed::Box;
#[cfg(feature = "std")]
use std::net::SocketAddr;
#[cfg(all(feature = "std", any(windows, not(feature = "fork"))))]
use std::process::Stdio;
#[cfg(all(unix, feature = "std"))]
use std::{fs::File, os::unix::io::AsRawFd};
#[cfg(all(unix, feature = "std", feature = "fork"))]
use libafl_bolts::llmp::Broker;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use libafl_bolts::llmp::Brokers;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use libafl_bolts::llmp::LlmpBroker;
#[cfg(all(unix, feature = "std"))]
use libafl_bolts::os::dup2;
#[cfg(all(feature = "std", any(windows, not(feature = "fork"))))]
use libafl_bolts::os::startable_self;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use libafl_bolts::{
core_affinity::get_core_ids,
os::{fork, ForkResult},
};
use libafl_bolts::{ use libafl_bolts::{
core_affinity::{CoreId, Cores}, core_affinity::{CoreId, Cores},
shmem::ShMemProvider, shmem::ShMemProvider,
tuples::{tuple_list, Handle}, tuples::{tuple_list, Handle},
}; };
#[cfg(feature = "std")] use serde::{Deserialize, Serialize};
use typed_builder::TypedBuilder; use typed_builder::TypedBuilder;
#[cfg(all(unix, feature = "fork"))]
use {
crate::{
events::{centralized::CentralizedEventManager, CentralizedLlmpHook, StdLlmpEventHook},
inputs::UsesInput,
state::UsesState,
},
alloc::string::ToString,
libafl_bolts::{
core_affinity::get_core_ids,
llmp::{Broker, Brokers, LlmpBroker},
os::{fork, ForkResult},
},
std::boxed::Box,
};
#[cfg(unix)]
use {
libafl_bolts::os::dup2,
std::{fs::File, os::unix::io::AsRawFd},
};
#[cfg(any(windows, not(feature = "fork")))]
use {libafl_bolts::os::startable_self, std::process::Stdio};
use super::EventManagerHooksTuple; #[cfg(all(unix, feature = "fork", feature = "multi_machine"))]
#[cfg(all(unix, feature = "std", feature = "fork"))] use crate::events::multi_machine::{NodeDescriptor, TcpMultiMachineHooks};
use super::StdLlmpEventHook;
#[cfg(all(unix, feature = "std", feature = "fork", feature = "multi_machine"))]
use crate::events::multi_machine::NodeDescriptor;
#[cfg(all(unix, feature = "std", feature = "fork", feature = "multi_machine"))]
use crate::events::multi_machine::TcpMultiMachineHooks;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use crate::events::{centralized::CentralizedEventManager, CentralizedLlmpHook};
#[cfg(all(unix, feature = "std", feature = "fork"))]
use crate::inputs::UsesInput;
use crate::observers::TimeObserver;
#[cfg(all(unix, feature = "std", feature = "fork"))]
use crate::state::UsesState;
#[cfg(feature = "std")]
use crate::{ use crate::{
events::{ events::{
llmp::{LlmpRestartingEventManager, LlmpShouldSaveState, ManagerKind, RestartingMgr}, llmp::{LlmpRestartingEventManager, LlmpShouldSaveState, ManagerKind, RestartingMgr},
EventConfig, EventConfig, EventManagerHooksTuple,
}, },
monitors::Monitor, monitors::Monitor,
observers::TimeObserver,
state::{HasExecutions, State}, state::{HasExecutions, State},
Error, Error,
}; };
@ -83,15 +69,67 @@ const _AFL_LAUNCHER_CLIENT: &str = "AFL_LAUNCHER_CLIENT";
#[cfg(all(feature = "fork", unix))] #[cfg(all(feature = "fork", unix))]
const LIBAFL_DEBUG_OUTPUT: &str = "LIBAFL_DEBUG_OUTPUT"; const LIBAFL_DEBUG_OUTPUT: &str = "LIBAFL_DEBUG_OUTPUT";
/// Information about this client from the launcher
#[derive(Debug, Clone, Serialize, Deserialize)]
pub struct ClientDescription {
id: usize,
overcommit_id: usize,
core_id: CoreId,
}
impl ClientDescription {
/// Create a [`ClientDescription`]
#[must_use]
pub fn new(id: usize, overcommit_id: usize, core_id: CoreId) -> Self {
Self {
id,
overcommit_id,
core_id,
}
}
/// Id unique across all clients spawned by this launcher
#[must_use]
pub fn id(&self) -> usize {
self.id
}
/// [`CoreId`] this client is bound to
#[must_use]
pub fn core_id(&self) -> CoreId {
self.core_id
}
/// Incremental id unique for all clients on the same core
#[must_use]
pub fn overcommit_id(&self) -> usize {
self.overcommit_id
}
/// Create a string representation safe for environment variables
#[must_use]
pub fn to_safe_string(&self) -> String {
format!("{}_{}_{}", self.id, self.overcommit_id, self.core_id.0)
}
/// Parse the string created by [`Self::to_safe_string`].
#[must_use]
pub fn from_safe_string(input: &str) -> Self {
let mut iter = input.split('_');
let id = iter.next().unwrap().parse().unwrap();
let overcommit_id = iter.next().unwrap().parse().unwrap();
let core_id = iter.next().unwrap().parse::<usize>().unwrap().into();
Self {
id,
overcommit_id,
core_id,
}
}
}
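
For illustration, the safe-string encoding introduced above round-trips as follows. This snippet is hypothetical (it is not part of the diff) and only uses the constructor and accessors defined on `ClientDescription`:

```rust
use libafl_bolts::core_affinity::CoreId;

// Overall client id 3, second client on its core (overcommit_id 1), bound to core 5.
let desc = ClientDescription::new(3, 1, CoreId(5));
assert_eq!(desc.to_safe_string(), "3_1_5");

// The launcher hands this string to respawned children through the
// AFL_LAUNCHER_CLIENT environment variable and parses it back on the client side.
let parsed = ClientDescription::from_safe_string(&desc.to_safe_string());
assert_eq!(parsed.id(), 3);
assert_eq!(parsed.overcommit_id(), 1);
assert_eq!(parsed.core_id().0, 5);
```

Note that `from_safe_string` unwraps while parsing, so it should only be fed strings produced by `to_safe_string`.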
/// Provides a [`Launcher`], which can be used to launch a fuzzing run on a specified list of cores /// Provides a [`Launcher`], which can be used to launch a fuzzing run on a specified list of cores
/// ///
/// Will hide child output, unless the settings indicate otherwise, or the `LIBAFL_DEBUG_OUTPUT` env variable is set. /// Will hide child output, unless the settings indicate otherwise, or the `LIBAFL_DEBUG_OUTPUT` env variable is set.
#[cfg(feature = "std")]
#[allow(
clippy::type_complexity,
missing_debug_implementations,
clippy::ignored_unit_patterns
)]
#[derive(TypedBuilder)] #[derive(TypedBuilder)]
pub struct Launcher<'a, CF, MT, SP> { pub struct Launcher<'a, CF, MT, SP> {
/// The `ShmemProvider` to use /// The `ShmemProvider` to use
@ -158,7 +196,7 @@ impl<CF, MT, SP> Debug for Launcher<'_, CF, MT, SP> {
.field("core", &self.cores) .field("core", &self.cores)
.field("spawn_broker", &self.spawn_broker) .field("spawn_broker", &self.spawn_broker)
.field("remote_broker_addr", &self.remote_broker_addr); .field("remote_broker_addr", &self.remote_broker_addr);
#[cfg(all(unix, feature = "std"))] #[cfg(unix)]
{ {
dbg_struct dbg_struct
.field("stdout_file", &self.stdout_file) .field("stdout_file", &self.stdout_file)
@ -175,21 +213,20 @@ where
SP: ShMemProvider, SP: ShMemProvider,
{ {
/// Launch the broker and the clients and fuzz /// Launch the broker and the clients and fuzz
#[cfg(all( #[cfg(any(windows, not(feature = "fork"), all(unix, feature = "fork")))]
feature = "std",
any(windows, not(feature = "fork"), all(unix, feature = "fork"))
))]
#[allow(unused_mut, clippy::match_wild_err_arm)]
pub fn launch<S>(&mut self) -> Result<(), Error> pub fn launch<S>(&mut self) -> Result<(), Error>
where where
S: State + HasExecutions, S: State + HasExecutions,
CF: FnOnce(Option<S>, LlmpRestartingEventManager<(), S, SP>, CoreId) -> Result<(), Error>, CF: FnOnce(
Option<S>,
LlmpRestartingEventManager<(), S, SP>,
ClientDescription,
) -> Result<(), Error>,
{ {
Self::launch_with_hooks(self, tuple_list!()) Self::launch_with_hooks(self, tuple_list!())
} }
} }
#[cfg(feature = "std")]
impl<CF, MT, SP> Launcher<'_, CF, MT, SP> impl<CF, MT, SP> Launcher<'_, CF, MT, SP>
where where
MT: Monitor + Clone, MT: Monitor + Clone,
@ -197,12 +234,15 @@ where
{ {
/// Launch the broker and the clients and fuzz with a user-supplied hook /// Launch the broker and the clients and fuzz with a user-supplied hook
#[cfg(all(unix, feature = "fork"))] #[cfg(all(unix, feature = "fork"))]
#[allow(clippy::similar_names, clippy::too_many_lines)]
pub fn launch_with_hooks<EMH, S>(&mut self, hooks: EMH) -> Result<(), Error> pub fn launch_with_hooks<EMH, S>(&mut self, hooks: EMH) -> Result<(), Error>
where where
S: State + HasExecutions, S: State + HasExecutions,
EMH: EventManagerHooksTuple<S> + Clone + Copy, EMH: EventManagerHooksTuple<S> + Clone + Copy,
CF: FnOnce(Option<S>, LlmpRestartingEventManager<EMH, S, SP>, CoreId) -> Result<(), Error>, CF: FnOnce(
Option<S>,
LlmpRestartingEventManager<EMH, S, SP>,
ClientDescription,
) -> Result<(), Error>,
{ {
if self.cores.ids.is_empty() { if self.cores.ids.is_empty() {
return Err(Error::illegal_argument( return Err(Error::illegal_argument(
@ -231,10 +271,10 @@ where
let debug_output = std::env::var(LIBAFL_DEBUG_OUTPUT).is_ok(); let debug_output = std::env::var(LIBAFL_DEBUG_OUTPUT).is_ok();
// Spawn clients // Spawn clients
let mut index = 0_u64; let mut index = 0_usize;
for (id, bind_to) in core_ids.iter().enumerate() { for bind_to in core_ids {
if self.cores.ids.iter().any(|&x| x == id.into()) { if self.cores.ids.iter().any(|&x| x == bind_to) {
for _ in 0..self.overcommit { for overcommit_id in 0..self.overcommit {
index += 1; index += 1;
self.shmem_provider.pre_fork()?; self.shmem_provider.pre_fork()?;
// # Safety // # Safety
@ -243,7 +283,9 @@ where
ForkResult::Parent(child) => { ForkResult::Parent(child) => {
self.shmem_provider.post_fork(false)?; self.shmem_provider.post_fork(false)?;
handles.push(child.pid); handles.push(child.pid);
log::info!("child spawned and bound to core {id}"); log::info!(
"child spawned with id {index} and bound to core {bind_to:?}"
);
} }
ForkResult::Child => { ForkResult::Child => {
// # Safety // # Safety
@ -251,7 +293,9 @@ where
log::info!("{:?} PostFork", unsafe { libc::getpid() }); log::info!("{:?} PostFork", unsafe { libc::getpid() });
self.shmem_provider.post_fork(true)?; self.shmem_provider.post_fork(true)?;
std::thread::sleep(Duration::from_millis(index * self.launch_delay)); std::thread::sleep(Duration::from_millis(
index as u64 * self.launch_delay,
));
if !debug_output { if !debug_output {
if let Some(file) = &self.opened_stdout_file { if let Some(file) = &self.opened_stdout_file {
@ -264,12 +308,15 @@ where
} }
} }
let client_description =
ClientDescription::new(index, overcommit_id, bind_to);
// Fuzzer client. keeps retrying the connection to broker till the broker starts // Fuzzer client. keeps retrying the connection to broker till the broker starts
let builder = RestartingMgr::<EMH, MT, S, SP>::builder() let builder = RestartingMgr::<EMH, MT, S, SP>::builder()
.shmem_provider(self.shmem_provider.clone()) .shmem_provider(self.shmem_provider.clone())
.broker_port(self.broker_port) .broker_port(self.broker_port)
.kind(ManagerKind::Client { .kind(ManagerKind::Client {
cpu_core: Some(*bind_to), client_description: client_description.clone(),
}) })
.configuration(self.configuration) .configuration(self.configuration)
.serialize_state(self.serialize_state) .serialize_state(self.serialize_state)
@ -277,7 +324,11 @@ where
let builder = builder.time_ref(self.time_ref.clone()); let builder = builder.time_ref(self.time_ref.clone());
let (state, mgr) = builder.build().launch()?; let (state, mgr) = builder.build().launch()?;
return (self.run_client.take().unwrap())(state, mgr, *bind_to); return (self.run_client.take().unwrap())(
state,
mgr,
client_description,
);
} }
}; };
} }
@ -329,12 +380,16 @@ where
/// Launch the broker and the clients and fuzz /// Launch the broker and the clients and fuzz
#[cfg(any(windows, not(feature = "fork")))] #[cfg(any(windows, not(feature = "fork")))]
#[allow(unused_mut, clippy::match_wild_err_arm, clippy::too_many_lines)] #[allow(clippy::too_many_lines, clippy::match_wild_err_arm)]
pub fn launch_with_hooks<EMH, S>(&mut self, hooks: EMH) -> Result<(), Error> pub fn launch_with_hooks<EMH, S>(&mut self, hooks: EMH) -> Result<(), Error>
where where
S: State + HasExecutions, S: State + HasExecutions,
EMH: EventManagerHooksTuple<S> + Clone + Copy, EMH: EventManagerHooksTuple<S> + Clone + Copy,
CF: FnOnce(Option<S>, LlmpRestartingEventManager<EMH, S, SP>, CoreId) -> Result<(), Error>, CF: FnOnce(
Option<S>,
LlmpRestartingEventManager<EMH, S, SP>,
ClientDescription,
) -> Result<(), Error>,
{ {
use libafl_bolts::core_affinity; use libafl_bolts::core_affinity;
@ -342,14 +397,14 @@ where
let mut handles = match is_client { let mut handles = match is_client {
Ok(core_conf) => { Ok(core_conf) => {
let core_id = core_conf.parse()?; let client_description = ClientDescription::from_safe_string(&core_conf);
// the actual client. do the fuzzing // the actual client. do the fuzzing
let builder = RestartingMgr::<EMH, MT, S, SP>::builder() let builder = RestartingMgr::<EMH, MT, S, SP>::builder()
.shmem_provider(self.shmem_provider.clone()) .shmem_provider(self.shmem_provider.clone())
.broker_port(self.broker_port) .broker_port(self.broker_port)
.kind(ManagerKind::Client { .kind(ManagerKind::Client {
cpu_core: Some(CoreId(core_id)), client_description: client_description.clone(),
}) })
.configuration(self.configuration) .configuration(self.configuration)
.serialize_state(self.serialize_state) .serialize_state(self.serialize_state)
@ -359,14 +414,13 @@ where
let (state, mgr) = builder.build().launch()?; let (state, mgr) = builder.build().launch()?;
return (self.run_client.take().unwrap())(state, mgr, CoreId(core_id)); return (self.run_client.take().unwrap())(state, mgr, client_description);
} }
Err(std::env::VarError::NotPresent) => { Err(std::env::VarError::NotPresent) => {
// I am a broker // I am a broker
// before going to the broker loop, spawn n clients // before going to the broker loop, spawn n clients
let core_ids = core_affinity::get_core_ids().unwrap(); let core_ids = core_affinity::get_core_ids().unwrap();
let num_cores = core_ids.len();
let mut handles = vec![]; let mut handles = vec![];
log::info!("spawning on cores: {:?}", self.cores); log::info!("spawning on cores: {:?}", self.cores);
@ -393,10 +447,13 @@ where
} }
} }
//spawn clients //spawn clients
for (id, _) in core_ids.iter().enumerate().take(num_cores) { let mut index = 0;
if self.cores.ids.iter().any(|&x| x == id.into()) { for core_id in core_ids {
for _ in 0..self.overcommit { if self.cores.ids.iter().any(|&x| x == core_id) {
for overcommit_i in 0..self.overcommit {
index += 1;
// Forward own stdio to child processes, if requested by user // Forward own stdio to child processes, if requested by user
#[allow(unused_mut)]
let (mut stdout, mut stderr) = (Stdio::null(), Stdio::null()); let (mut stdout, mut stderr) = (Stdio::null(), Stdio::null());
#[cfg(unix)] #[cfg(unix)]
{ {
@ -407,10 +464,15 @@ where
} }
std::thread::sleep(Duration::from_millis( std::thread::sleep(Duration::from_millis(
id as u64 * self.launch_delay, core_id.0 as u64 * self.launch_delay,
)); ));
std::env::set_var(_AFL_LAUNCHER_CLIENT, id.to_string()); let client_description =
ClientDescription::new(index, overcommit_i, core_id);
std::env::set_var(
_AFL_LAUNCHER_CLIENT,
client_description.to_safe_string(),
);
let mut child = startable_self()?; let mut child = startable_self()?;
let child = (if debug_output { let child = (if debug_output {
&mut child &mut child
@ -476,9 +538,8 @@ where
/// ///
/// Provides a Launcher, which can be used to launch a fuzzing run on a specified list of cores with a single main and multiple secondary nodes /// Provides a Launcher, which can be used to launch a fuzzing run on a specified list of cores with a single main and multiple secondary nodes
/// This is for centralized, the 4th argument of the closure should mean if this is the main node. /// This is for centralized, the 4th argument of the closure should mean if this is the main node.
#[cfg(all(unix, feature = "std", feature = "fork"))] #[cfg(all(unix, feature = "fork"))]
#[derive(TypedBuilder)] #[derive(TypedBuilder)]
#[allow(clippy::type_complexity, missing_debug_implementations)]
pub struct CentralizedLauncher<'a, CF, MF, MT, SP> { pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
/// The `ShmemProvider` to use /// The `ShmemProvider` to use
shmem_provider: SP, shmem_provider: SP,
@ -506,6 +567,9 @@ pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
time_obs: Option<Handle<TimeObserver>>, time_obs: Option<Handle<TimeObserver>>,
/// The list of cores to run on /// The list of cores to run on
cores: &'a Cores, cores: &'a Cores,
/// The number of clients to spawn on each core
#[builder(default = 1)]
overcommit: usize,
/// A file name to write all client output to /// A file name to write all client output to
#[builder(default = None)] #[builder(default = None)]
stdout_file: Option<&'a str>, stdout_file: Option<&'a str>,
@ -513,7 +577,7 @@ pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
#[builder(default = 10)] #[builder(default = 10)]
launch_delay: u64, launch_delay: u64,
/// The actual, opened, `stdout_file` - so that we keep it open until the end /// The actual, opened, `stdout_file` - so that we keep it open until the end
#[cfg(all(unix, feature = "std", feature = "fork"))] #[cfg(all(unix, feature = "fork"))]
#[builder(setter(skip), default = None)] #[builder(setter(skip), default = None)]
opened_stdout_file: Option<File>, opened_stdout_file: Option<File>,
/// A file name to write all client stderr output to. If not specified, output is sent to /// A file name to write all client stderr output to. If not specified, output is sent to
@ -521,7 +585,7 @@ pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
#[builder(default = None)] #[builder(default = None)]
stderr_file: Option<&'a str>, stderr_file: Option<&'a str>,
/// The actual, opened, `stdout_file` - so that we keep it open until the end /// The actual, opened, `stdout_file` - so that we keep it open until the end
#[cfg(all(unix, feature = "std", feature = "fork"))] #[cfg(all(unix, feature = "fork"))]
#[builder(setter(skip), default = None)] #[builder(setter(skip), default = None)]
opened_stderr_file: Option<File>, opened_stderr_file: Option<File>,
/// The `ip:port` address of another broker to connect our new broker to for multi-machine /// The `ip:port` address of another broker to connect our new broker to for multi-machine
@ -541,13 +605,14 @@ pub struct CentralizedLauncher<'a, CF, MF, MT, SP> {
serialize_state: LlmpShouldSaveState, serialize_state: LlmpShouldSaveState,
} }
#[cfg(all(unix, feature = "std", feature = "fork"))] #[cfg(all(unix, feature = "fork"))]
impl<CF, MF, MT, SP> Debug for CentralizedLauncher<'_, CF, MF, MT, SP> { impl<CF, MF, MT, SP> Debug for CentralizedLauncher<'_, CF, MF, MT, SP> {
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result { fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("Launcher") f.debug_struct("Launcher")
.field("configuration", &self.configuration) .field("configuration", &self.configuration)
.field("broker_port", &self.broker_port) .field("broker_port", &self.broker_port)
.field("core", &self.cores) .field("cores", &self.cores)
.field("overcommit", &self.overcommit)
.field("spawn_broker", &self.spawn_broker) .field("spawn_broker", &self.spawn_broker)
.field("remote_broker_addr", &self.remote_broker_addr) .field("remote_broker_addr", &self.remote_broker_addr)
.field("stdout_file", &self.stdout_file) .field("stdout_file", &self.stdout_file)
@ -559,7 +624,7 @@ impl<CF, MF, MT, SP> Debug for CentralizedLauncher<'_, CF, MF, MT, SP> {
/// The standard inner manager of centralized /// The standard inner manager of centralized
pub type StdCentralizedInnerMgr<S, SP> = LlmpRestartingEventManager<(), S, SP>; pub type StdCentralizedInnerMgr<S, SP> = LlmpRestartingEventManager<(), S, SP>;
#[cfg(all(unix, feature = "std", feature = "fork"))] #[cfg(all(unix, feature = "fork"))]
impl<CF, MF, MT, SP> CentralizedLauncher<'_, CF, MF, MT, SP> impl<CF, MF, MT, SP> CentralizedLauncher<'_, CF, MF, MT, SP>
where where
MT: Monitor + Clone + 'static, MT: Monitor + Clone + 'static,
@ -573,37 +638,36 @@ where
CF: FnOnce( CF: FnOnce(
Option<S>, Option<S>,
CentralizedEventManager<StdCentralizedInnerMgr<S, SP>, (), S, SP>, CentralizedEventManager<StdCentralizedInnerMgr<S, SP>, (), S, SP>,
CoreId, ClientDescription,
) -> Result<(), Error>, ) -> Result<(), Error>,
MF: FnOnce( MF: FnOnce(
Option<S>, Option<S>,
CentralizedEventManager<StdCentralizedInnerMgr<S, SP>, (), S, SP>, CentralizedEventManager<StdCentralizedInnerMgr<S, SP>, (), S, SP>,
CoreId, ClientDescription,
) -> Result<(), Error>, ) -> Result<(), Error>,
{ {
let restarting_mgr_builder = |centralized_launcher: &Self, core_to_bind: CoreId| { let restarting_mgr_builder =
// Fuzzer client. keeps retrying the connection to broker till the broker starts |centralized_launcher: &Self, client_description: ClientDescription| {
let builder = RestartingMgr::<(), MT, S, SP>::builder() // Fuzzer client. keeps retrying the connection to broker till the broker starts
.always_interesting(centralized_launcher.always_interesting) let builder = RestartingMgr::<(), MT, S, SP>::builder()
.shmem_provider(centralized_launcher.shmem_provider.clone()) .always_interesting(centralized_launcher.always_interesting)
.broker_port(centralized_launcher.broker_port) .shmem_provider(centralized_launcher.shmem_provider.clone())
.kind(ManagerKind::Client { .broker_port(centralized_launcher.broker_port)
cpu_core: Some(core_to_bind), .kind(ManagerKind::Client { client_description })
}) .configuration(centralized_launcher.configuration)
.configuration(centralized_launcher.configuration) .serialize_state(centralized_launcher.serialize_state)
.serialize_state(centralized_launcher.serialize_state) .hooks(tuple_list!());
.hooks(tuple_list!());
let builder = builder.time_ref(centralized_launcher.time_obs.clone()); let builder = builder.time_ref(centralized_launcher.time_obs.clone());
builder.build().launch() builder.build().launch()
}; };
self.launch_generic(restarting_mgr_builder, restarting_mgr_builder) self.launch_generic(restarting_mgr_builder, restarting_mgr_builder)
} }
} }
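
With these bounds, both the main and the secondary closure of a `CentralizedLauncher` now receive a `ClientDescription`. Below is a hedged sketch of the caller side; the setter names are assumed to mirror the struct fields via `TypedBuilder` (as in the centralized example fuzzer earlier in this diff), and any further required setters are elided.

```rust
let mut secondary_run_client = |state: Option<_>,
                                mut mgr: CentralizedEventManager<_, _, _, _>,
                                client_description: ClientDescription| {
    println!(
        "secondary client {} on core {:?}",
        client_description.id(),
        client_description.core_id()
    );
    // ... fuzzing setup and fuzz loop elided ...
    Ok(())
};
let mut main_run_client = secondary_run_client.clone();

CentralizedLauncher::builder()
    .shmem_provider(shmem_provider)
    .configuration(EventConfig::from_name("default"))
    .monitor(monitor)
    .main_run_client(&mut main_run_client)
    .secondary_run_client(&mut secondary_run_client)
    .cores(&cores)
    .overcommit(2) // two clients per core
    .broker_port(broker_port)
    // ... further setters (e.g. time_obs, stdout_file) as needed ...
    .build()
    .launch()?; // the enclosing function is assumed to return Result<(), Error>
```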
#[cfg(all(unix, feature = "std", feature = "fork"))] #[cfg(all(unix, feature = "fork"))]
impl<CF, MF, MT, SP> CentralizedLauncher<'_, CF, MF, MT, SP> impl<CF, MF, MT, SP> CentralizedLauncher<'_, CF, MF, MT, SP>
where where
MT: Monitor + Clone + 'static, MT: Monitor + Clone + 'static,
@ -612,7 +676,6 @@ where
/// Launch a Centralized-based fuzzer. /// Launch a Centralized-based fuzzer.
/// - `main_inner_mgr_builder` will be called to build the inner manager of the main node. /// - `main_inner_mgr_builder` will be called to build the inner manager of the main node.
/// - `secondary_inner_mgr_builder` will be called to build the inner manager of the secondary nodes. /// - `secondary_inner_mgr_builder` will be called to build the inner manager of the secondary nodes.
#[allow(clippy::similar_names, clippy::too_many_lines)]
pub fn launch_generic<EM, EMB, S>( pub fn launch_generic<EM, EMB, S>(
&mut self, &mut self,
main_inner_mgr_builder: EMB, main_inner_mgr_builder: EMB,
@ -621,13 +684,17 @@ where
where where
S: State, S: State,
S::Input: Send + Sync + 'static, S::Input: Send + Sync + 'static,
CF: FnOnce(Option<S>, CentralizedEventManager<EM, (), S, SP>, CoreId) -> Result<(), Error>, CF: FnOnce(
Option<S>,
CentralizedEventManager<EM, (), S, SP>,
ClientDescription,
) -> Result<(), Error>,
EM: UsesState<State = S>, EM: UsesState<State = S>,
EMB: FnOnce(&Self, CoreId) -> Result<(Option<S>, EM), Error>, EMB: FnOnce(&Self, ClientDescription) -> Result<(Option<S>, EM), Error>,
MF: FnOnce( MF: FnOnce(
Option<S>, Option<S>,
CentralizedEventManager<EM, (), S, SP>, // No broker_hooks for centralized EM CentralizedEventManager<EM, (), S, SP>, // No broker_hooks for centralized EM
CoreId, ClientDescription,
) -> Result<(), Error>, ) -> Result<(), Error>,
<<EM as UsesState>::State as UsesInput>::Input: Send + Sync + 'static, <<EM as UsesState>::State as UsesInput>::Input: Send + Sync + 'static,
{ {
@ -647,7 +714,6 @@ where
} }
let core_ids = get_core_ids().unwrap(); let core_ids = get_core_ids().unwrap();
let num_cores = core_ids.len();
let mut handles = vec![]; let mut handles = vec![];
log::debug!("spawning on cores: {:?}", self.cores); log::debug!("spawning on cores: {:?}", self.cores);
@ -662,78 +728,101 @@ where
let debug_output = std::env::var(LIBAFL_DEBUG_OUTPUT).is_ok(); let debug_output = std::env::var(LIBAFL_DEBUG_OUTPUT).is_ok();
// Spawn clients // Spawn clients
let mut index = 0_u64; let mut index = 0_usize;
for (id, bind_to) in core_ids.iter().enumerate().take(num_cores) { for bind_to in core_ids {
if self.cores.ids.iter().any(|&x| x == id.into()) { if self.cores.ids.iter().any(|&x| x == bind_to) {
index += 1; for overcommit_id in 0..self.overcommit {
self.shmem_provider.pre_fork()?; index += 1;
match unsafe { fork() }? { self.shmem_provider.pre_fork()?;
ForkResult::Parent(child) => { match unsafe { fork() }? {
self.shmem_provider.post_fork(false)?; ForkResult::Parent(child) => {
handles.push(child.pid); self.shmem_provider.post_fork(false)?;
#[cfg(feature = "std")] handles.push(child.pid);
log::info!("child spawned and bound to core {id}"); log::info!(
} "child with client id {index} spawned and bound to core {bind_to:?}"
ForkResult::Child => { );
log::info!("{:?} PostFork", unsafe { libc::getpid() }); }
self.shmem_provider.post_fork(true)?; ForkResult::Child => {
log::info!("{:?} PostFork", unsafe { libc::getpid() });
self.shmem_provider.post_fork(true)?;
std::thread::sleep(Duration::from_millis(index * self.launch_delay)); std::thread::sleep(Duration::from_millis(
index as u64 * self.launch_delay,
));
if !debug_output { if !debug_output {
if let Some(file) = &self.opened_stdout_file { if let Some(file) = &self.opened_stdout_file {
dup2(file.as_raw_fd(), libc::STDOUT_FILENO)?; dup2(file.as_raw_fd(), libc::STDOUT_FILENO)?;
if let Some(stderr) = &self.opened_stderr_file { if let Some(stderr) = &self.opened_stderr_file {
dup2(stderr.as_raw_fd(), libc::STDERR_FILENO)?; dup2(stderr.as_raw_fd(), libc::STDERR_FILENO)?;
} else { } else {
dup2(file.as_raw_fd(), libc::STDERR_FILENO)?; dup2(file.as_raw_fd(), libc::STDERR_FILENO)?;
}
} }
} }
}
if index == 1 { let client_description =
// Main client ClientDescription::new(index, overcommit_id, bind_to);
log::debug!("Running main client on PID {}", std::process::id());
let (state, mgr) =
main_inner_mgr_builder.take().unwrap()(self, *bind_to)?;
let mut centralized_event_manager_builder = if index == 1 {
CentralizedEventManager::builder(); // Main client
centralized_event_manager_builder = log::debug!("Running main client on PID {}", std::process::id());
centralized_event_manager_builder.is_main(true); let (state, mgr) = main_inner_mgr_builder.take().unwrap()(
self,
client_description.clone(),
)?;
let c_mgr = centralized_event_manager_builder.build_on_port( let mut centralized_event_manager_builder =
mgr, CentralizedEventManager::builder();
// tuple_list!(multi_machine_event_manager_hook.take().unwrap()), centralized_event_manager_builder =
tuple_list!(), centralized_event_manager_builder.is_main(true);
self.shmem_provider.clone(),
self.centralized_broker_port,
self.time_obs.clone(),
)?;
self.main_run_client.take().unwrap()(state, c_mgr, *bind_to)?; let c_mgr = centralized_event_manager_builder.build_on_port(
Err(Error::shutting_down()) mgr,
} else { // tuple_list!(multi_machine_event_manager_hook.take().unwrap()),
// Secondary clients tuple_list!(),
log::debug!("Running secondary client on PID {}", std::process::id()); self.shmem_provider.clone(),
let (state, mgr) = self.centralized_broker_port,
secondary_inner_mgr_builder.take().unwrap()(self, *bind_to)?; self.time_obs.clone(),
)?;
let centralized_builder = CentralizedEventManager::builder(); self.main_run_client.take().unwrap()(
state,
c_mgr,
client_description,
)?;
Err(Error::shutting_down())
} else {
// Secondary clients
log::debug!(
"Running secondary client on PID {}",
std::process::id()
);
let (state, mgr) = secondary_inner_mgr_builder.take().unwrap()(
self,
client_description.clone(),
)?;
let c_mgr = centralized_builder.build_on_port( let centralized_builder = CentralizedEventManager::builder();
mgr,
tuple_list!(),
self.shmem_provider.clone(),
self.centralized_broker_port,
self.time_obs.clone(),
)?;
self.secondary_run_client.take().unwrap()(state, c_mgr, *bind_to)?; let c_mgr = centralized_builder.build_on_port(
Err(Error::shutting_down()) mgr,
} tuple_list!(),
}?, self.shmem_provider.clone(),
}; self.centralized_broker_port,
self.time_obs.clone(),
)?;
self.secondary_run_client.take().unwrap()(
state,
c_mgr,
client_description,
)?;
Err(Error::shutting_down())
}
}?,
};
}
} }
} }


@ -39,9 +39,9 @@ use crate::events::EVENTMGR_SIGHANDLER_STATE;
use crate::events::{AdaptiveSerializer, CustomBufEventResult, HasCustomBufHandlers}; use crate::events::{AdaptiveSerializer, CustomBufEventResult, HasCustomBufHandlers};
use crate::{ use crate::{
events::{ events::{
Event, EventConfig, EventFirer, EventManager, EventManagerHooksTuple, EventManagerId, launcher::ClientDescription, Event, EventConfig, EventFirer, EventManager,
EventProcessor, EventRestarter, HasEventManagerId, LlmpEventManager, LlmpShouldSaveState, EventManagerHooksTuple, EventManagerId, EventProcessor, EventRestarter, HasEventManagerId,
ProgressReporter, StdLlmpEventHook, LlmpEventManager, LlmpShouldSaveState, ProgressReporter, StdLlmpEventHook,
}, },
executors::{Executor, HasObservers}, executors::{Executor, HasObservers},
fuzzer::{Evaluator, EvaluatorObservers, ExecutionProcessor}, fuzzer::{Evaluator, EvaluatorObservers, ExecutionProcessor},
@ -322,14 +322,14 @@ where
/// The kind of manager we're creating right now /// The kind of manager we're creating right now
#[cfg(feature = "std")] #[cfg(feature = "std")]
#[derive(Debug, Clone, Copy)] #[derive(Debug, Clone)]
pub enum ManagerKind { pub enum ManagerKind {
/// Any kind will do /// Any kind will do
Any, Any,
/// A client, getting messages from a local broker. /// A client, getting messages from a local broker.
Client { Client {
/// The CPU core ID of this client /// The client description
cpu_core: Option<CoreId>, client_description: ClientDescription,
}, },
/// An [`LlmpBroker`], forwarding the packets of local clients. /// An [`LlmpBroker`], forwarding the packets of local clients.
Broker, Broker,
@ -481,7 +481,7 @@ where
Err(Error::shutting_down()) Err(Error::shutting_down())
}; };
// We get here if we are on Unix, or we are a broker on Windows (or without forks). // We get here if we are on Unix, or we are a broker on Windows (or without forks).
let (mgr, core_id) = match self.kind { let (mgr, core_id) = match &self.kind {
ManagerKind::Any => { ManagerKind::Any => {
let connection = let connection =
LlmpConnection::on_port(self.shmem_provider.clone(), self.broker_port)?; LlmpConnection::on_port(self.shmem_provider.clone(), self.broker_port)?;
@ -528,7 +528,7 @@ where
broker_things(broker, self.remote_broker_addr)?; broker_things(broker, self.remote_broker_addr)?;
unreachable!("The broker may never return normally, only on errors or when shutting down."); unreachable!("The broker may never return normally, only on errors or when shutting down.");
} }
ManagerKind::Client { cpu_core } => { ManagerKind::Client { client_description } => {
// We are a client // We are a client
let mgr = LlmpEventManager::builder() let mgr = LlmpEventManager::builder()
.always_interesting(self.always_interesting) .always_interesting(self.always_interesting)
@ -540,7 +540,7 @@ where
self.time_ref.clone(), self.time_ref.clone(),
)?; )?;
(mgr, cpu_core) (mgr, Some(client_description.core_id()))
} }
}; };
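
On the manager side, `ManagerKind::Client` now carries the whole `ClientDescription` rather than an optional core id. A minimal sketch of the client-side construction, following the builder calls visible in the launcher hunks above (the generic parameters `EMH, MT, S, SP` and the surrounding variables are placeholders):

```rust
let client_description = ClientDescription::new(1, 0, CoreId(0));

// The RestartingMgr learns about the client's id, overcommit slot and core;
// internally it recovers the core to bind to via client_description.core_id().
let builder = RestartingMgr::<EMH, MT, S, SP>::builder()
    .shmem_provider(shmem_provider.clone())
    .broker_port(broker_port)
    .kind(ManagerKind::Client {
        client_description: client_description.clone(),
    })
    .configuration(configuration)
    .serialize_state(serialize_state)
    .hooks(tuple_list!());
let (state, mgr) = builder.build().launch()?;
```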


@ -40,7 +40,6 @@ use libafl_bolts::os::CTRL_C_EXIT;
use libafl_bolts::{ use libafl_bolts::{
current_time, current_time,
tuples::{Handle, MatchNameRef}, tuples::{Handle, MatchNameRef},
ClientId,
}; };
use serde::{Deserialize, Serialize}; use serde::{Deserialize, Serialize};
#[cfg(feature = "std")] #[cfg(feature = "std")]
@ -288,7 +287,7 @@ where
/// The time of generation of the event /// The time of generation of the event
time: Duration, time: Duration,
/// The original sender id, if forwarded /// The original sender id, if forwarded
forward_id: Option<ClientId>, forward_id: Option<libafl_bolts::ClientId>,
/// The (multi-machine) node from which the tc is from, if any /// The (multi-machine) node from which the tc is from, if any
#[cfg(all(unix, feature = "std", feature = "multi_machine"))] #[cfg(all(unix, feature = "std", feature = "multi_machine"))]
node_id: Option<NodeId>, node_id: Option<NodeId>,