Add Intel PT tracing support (#2471)

* WIP: IntelPT qemu systemmode

* use perf-event-open-sys instead of bindgen

* intelPT Add enable and disable tracing, add test

* Use static_assertions crate

* Fix volatiles, finish test

* Add Intel PT availability check

* Use LibAFL errors in Result

* Improve filtering

* Add KVM pt_mode check

* move static_assertions use

* Check for perf_event_open support

* Add (empty) IntelPT module

* Add IntelPTModule POC

* partial ideas to implement intel pt

* forgot smth

* trace decoding draft

* add libipt decoder

* use cpuid instead of reading /proc/cpuinfo

* investigating nondeterministic behaviour

* intel_pt module add thread creation hook

* Fully identify deps versions

Cargo docs: Although it looks like a specific version of the crate, it actually specifies a range of versions and allows SemVer compatible updates

* Move mem image to module, output to file for debug

* fixup! Use static_assertions crate

* Exclude host kernel from traces

* Bump libipt-rs

* Callback to get memory as an alternative to image

* WIP Add bootloader fuzzer example

* Split availability check: add availability_with_qemu

* Move IntelPT to observer

* Improve test docs

* Clippy happy now

* Taplo happy now

* Add IntelPTObserver boilerplate

* Hook instead of Observer

* Clippy & Taplo

* Add psb_freq setting

* Extremely bad and dirty babyfuzzer stealing

* Use thread local cell instead of mutex

* Try a trace diff based naive feedback

* fix perf aux buffer wrap handling

* Use f64 for feedback score

* Fix clippy for cargo test

* Add config format tests

* WIP intelpt babyfuzzer with fork

* Fix not wrapped tail offset in split buffer

* Baby PT with raw traces diff working

* Cache nr_filters

* Use Lazy_lock for perf_type

* Add baby_fuzzer_intel_pt

* restore baby fuzzer

* baby_fuzzer with block decoder

* instruction decoder instead of block

* Fix after upstream merge

* OwnedRefMut instead of Cow

* Read mem directly instead of going through files

* Fix cache lifetime and tail update

* clippy

* Taplo

* Compile caps only on linux

* clippy

* Fail compilation on unsupported OSes

* Add baby_fuzzer_intel_pt to CI

* Cleanup

* Move intel pt + linux check

* fix baby pt

* rollback forkexecutor

* Remove unused dep

* Cleanup

* Lints

* Compute an edge id instead of using only block ip

* Binary only intelPT POC

* put linux specific code behind target_os=linux

* Clippy & Taplo

* fix CI

* Disable relocation

* No unwrap in decode

* No expect in decode

* Better logging, smaller aux buffer

* add IntelPTBuilder

* some lints

* Add exclude_hv config

* Per CPU tracing and inheritance

* Parametrize buffer size

* Try not to break commandExecutor API pt.1

* Try not to break commandExecutor API pt.2

* Try not to break commandExecutor API pt.3

* fix baby PT

* Support on_crash & on_timeout callbacks for libafl_qemu modules (#2620)

* support (unsafe) on_crash / on_timeout callbacks for modules

* use libc types in bindgen

* Move common code to bolts

* Cleanup

* Revert changes to backtrace_baby_fuzzers/command_executor

* Move intel_pt in one file

* Use workspace deps

* add nr_addr_filter fallback

* Cleaning

* Improve decode

* Clippy

* Improve errors and docs

* Impl from<PtError> for libafl::Error

* Merge hooks

* Docs

* Clean command executor

* fix baby PT

* fix baby PT warnings

* decoder fills the map with no vec alloc

* WIP command executor intel PT

* filter_map() instead of filter().map()

* fix docs

* fix windows?

* Baby lints

* Small cleanings

* Use personality to disable ASLR at runtime

* Fix nix dep

* Use prc-maps in babyfuzzer

* working ET_DYN elf

* Cleanup Cargo.toml

* Clean command executor

* introduce PtraceCommandConfigurator

* Fix clippy & taplo

* input via stdin

* libipt as workspace dep

* Check kernel version

* support Arg input location

* Reorder stuff

* File input

* timeout support for PtraceExec

* Lints

* Move out method not needing self form IntelPT

* unimplemented

* Lints

* Move intel_pt_baby_fuzzer

* Move intel_pt_command_executor

* Document the need for smp_rmb

* Better comment

* Readme and Makefile.toml instead of build.rs

* Move out from libafl_bolts to libafl_intelpt

* Fix hooks

* (Almost) fix intel_pt command exec

* fix intel_pt command exec debug

* Fix baby_fuzzer

* &raw over addr_of!

* cfg(target_os = "linux")

* bolts Cargo.toml leftover

* minimum wage README.md

* extract join_split_trace from decode

* extract decode_block from decode

* add 1 to `previous_block_ip` to avoid that all the recursive basic blocks map to 0

* More generic hook

* fix windows

* Update CI, fmt

* No bitbybit

* Fix docker?

* Fix Apple silicon?

* Use old libipt from crates.io
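
Two of the commits above ("Compute an edge id instead of using only block ip" and "add 1 to `previous_block_ip` to avoid that all the recursive basic blocks map to 0") describe how decoded blocks become coverage-map entries. A minimal sketch of that idea follows; the function name and the mixing scheme are hypothetical illustrations, not LibAFL's actual decoder code:

```rust
// Hypothetical sketch: derive a coverage-map slot from the
// (previous block, current block) pair, rather than from the current
// block address alone, so different edges into the same block count
// as different coverage.
fn edge_id(previous_block_ip: u64, current_ip: u64, map_len: usize) -> usize {
    // Add 1 so that the very first block (previous == 0) and
    // self-recursive blocks do not all collapse onto slot 0.
    let prev = previous_block_ip.wrapping_add(1);
    // Mix both addresses so the A->B and B->A edges land on
    // different slots; a real implementation may use a stronger hash.
    let mixed = prev.rotate_left(5) ^ current_ip;
    (mixed as usize) % map_len
}

fn main() {
    let a = edge_id(0x4000, 0x4010, 4096);
    let b = edge_id(0x4010, 0x4000, 4096);
    // Both ids fall inside the map, and edge direction matters.
    assert!(a < 4096 && b < 4096);
    assert_ne!(a, b);
    println!("{a} {b}");
}
```

The map slot would then be incremented on every decoded block transition, with `previous_block_ip` updated after each block.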

---------

Co-authored-by: Romain Malmain <romain.malmain@pm.me>
Co-authored-by: Dominik Maier <domenukk@gmail.com>
Commit f7f8dff6cd (parent 5eff9c03d3), authored by Marco C. on 2024-11-13 02:34:46 +01:00, committed by GitHub.
GPG Key ID: B5690EEEBB952194 (no known key found for this signature in database).
24 changed files with 1981 additions and 26 deletions.


@@ -258,6 +258,8 @@ jobs:
- ./fuzzers/binary_only/frida_windows_gdiplus
- ./fuzzers/binary_only/frida_libpng
- ./fuzzers/binary_only/fuzzbench_qemu
- ./fuzzers/binary_only/intel_pt_baby_fuzzer
- ./fuzzers/binary_only/intel_pt_command_executor
- ./fuzzers/binary_only/tinyinst_simple
# Forkserver


@@ -5,7 +5,7 @@ runs:
steps:
- name: Install and cache deps
shell: bash
run: sudo apt-get update && sudo apt-get install -y curl lsb-release wget software-properties-common gnupg ninja-build shellcheck pax-utils nasm libsqlite3-dev libc6-dev libgtk-3-dev gcc g++ gcc-arm-none-eabi gcc-arm-linux-gnueabi g++-arm-linux-gnueabi libslirp-dev libz3-dev build-essential cmake
- uses: dtolnay/rust-toolchain@stable
- name: Add stable clippy
shell: bash


@@ -8,6 +8,7 @@ members = [
"libafl_concolic/symcc_libafl",
"libafl_derive",
"libafl_frida",
"libafl_intelpt",
"libafl_libfuzzer",
"libafl_nyx",
"libafl_targets",
@@ -49,6 +50,7 @@ exclude = [
[workspace.package]
version = "0.13.2"
license = "MIT OR Apache-2.0"

[workspace.dependencies]
ahash = { version = "0.8.11", default-features = false } # The hash function already used in hashbrown
@@ -60,6 +62,7 @@ cmake = "0.1.51"
document-features = "0.2.10"
hashbrown = { version = "0.14.5", default-features = false } # A faster hashmap, nostd compatible
libc = "0.2.159" # For (*nix) libc
libipt = "0.1.4"
log = "0.4.22"
meminterval = "0.4.1"
mimalloc = { version = "0.1.43", default-features = false }
@@ -77,6 +80,7 @@ serde = { version = "1.0.210", default-features = false } # serialization lib
serial_test = { version = "3.1.1", default-features = false }
serde_json = { version = "1.0.128", default-features = false }
serde_yaml = { version = "0.9.34" } # For parsing the injections yaml file
static_assertions = "1.1.0"
strum = "0.26.3"
strum_macros = "0.26.4"
toml = "0.8.19" # For parsing the injections toml file


@@ -52,6 +52,9 @@ COPY libafl_frida/Cargo.toml libafl_frida/build.rs libafl_frida/
COPY scripts/dummy.rs libafl_frida/src/lib.rs
COPY libafl_frida/src/gettls.c libafl_frida/src/gettls.c
COPY libafl_intelpt/Cargo.toml libafl_intelpt/README.md libafl_intelpt/
COPY scripts/dummy.rs libafl_intelpt/src/lib.rs
COPY libafl_qemu/Cargo.toml libafl_qemu/build.rs libafl_qemu/build_linux.rs libafl_qemu/
COPY scripts/dummy.rs libafl_qemu/src/lib.rs
@@ -144,6 +147,8 @@ COPY libafl_libfuzzer/src libafl_libfuzzer/src
COPY libafl_libfuzzer/runtime libafl_libfuzzer/runtime
COPY libafl_libfuzzer/build.rs libafl_libfuzzer/build.rs
RUN touch libafl_libfuzzer/src/lib.rs
COPY libafl_intelpt/src libafl_intelpt/src
RUN touch libafl_intelpt/src/lib.rs
RUN cargo build && cargo build --release
# Copy fuzzers over


@@ -0,0 +1,19 @@
[package]
name = "intel_pt_baby_fuzzer"
version = "0.13.2"
authors = [
"Andrea Fioraldi <andreafioraldi@gmail.com>",
"Dominik Maier <domenukk@gmail.com>",
"Marco Cavenati <cavenatimarco@gmail.com>",
]
edition = "2021"

[features]
tui = []

[dependencies]
libafl = { path = "../../../libafl/", default-features = false, features = [
"intel_pt",
] }
libafl_bolts = { path = "../../../libafl_bolts" }
proc-maps = "0.4.0"


@@ -0,0 +1,15 @@
# Baby fuzzer with Intel PT tracing

This is a minimal example of how to create a LibAFL-based fuzzer with Intel PT tracing.
It runs on a single core until a crash occurs and then exits.

The tested program is a simple Rust function without any instrumentation.

After building this example with `cargo build`, you need to grant the executable the necessary capabilities with
`sudo setcap cap_ipc_lock,cap_sys_ptrace,cap_sys_admin,cap_syslog=ep ./target/debug/intel_pt_baby_fuzzer`.

You can run this example using `cargo run`, and you can enable the TUI feature by building and running with
`--features tui`.

This fuzzer only works on Linux hosts with an Intel PT compatible CPU.


@@ -0,0 +1,153 @@
use std::{hint::black_box, num::NonZero, path::PathBuf, process, time::Duration};
#[cfg(feature = "tui")]
use libafl::monitors::tui::TuiMonitor;
#[cfg(not(feature = "tui"))]
use libafl::monitors::SimpleMonitor;
use libafl::{
corpus::{InMemoryCorpus, OnDiskCorpus},
events::SimpleEventManager,
executors::{
hooks::intel_pt::{IntelPTHook, Section},
inprocess::GenericInProcessExecutor,
ExitKind,
},
feedbacks::{CrashFeedback, MaxMapFeedback},
fuzzer::{Fuzzer, StdFuzzer},
generators::RandPrintablesGenerator,
inputs::{BytesInput, HasTargetBytes},
mutators::{havoc_mutations::havoc_mutations, scheduled::StdScheduledMutator},
observers::StdMapObserver,
schedulers::QueueScheduler,
stages::mutational::StdMutationalStage,
state::StdState,
};
use libafl_bolts::{current_nanos, rands::StdRand, tuples::tuple_list, AsSlice};
use proc_maps::get_process_maps;
// Coverage map
const MAP_SIZE: usize = 4096;
static mut MAP: [u8; MAP_SIZE] = [0; MAP_SIZE];
#[allow(static_mut_refs)]
static mut MAP_PTR: *mut u8 = unsafe { MAP.as_mut_ptr() };
pub fn main() {
// The closure that we want to fuzz
let mut harness = |input: &BytesInput| {
let target = input.target_bytes();
let buf = target.as_slice();
if !buf.is_empty() && buf[0] == b'a' {
let _do_something = black_box(0);
if buf.len() > 1 && buf[1] == b'b' {
let _do_something = black_box(0);
if buf.len() > 2 && buf[2] == b'c' {
panic!("Artificial bug triggered =)");
}
}
}
ExitKind::Ok
};
// Create an observation channel using the map
let observer = unsafe { StdMapObserver::from_mut_ptr("signals", MAP_PTR, MAP_SIZE) };
// Feedback to rate the interestingness of an input
let mut feedback = MaxMapFeedback::new(&observer);
// A feedback to choose if an input is a solution or not
let mut objective = CrashFeedback::new();
// create a State from scratch
let mut state = StdState::new(
// RNG
StdRand::with_seed(current_nanos()),
// Corpus that will be evolved, we keep it in memory for performance
InMemoryCorpus::new(),
// Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new(PathBuf::from("./crashes")).unwrap(),
// States of the feedbacks.
// The feedbacks can report the data that should persist in the State.
&mut feedback,
// Same for objective feedbacks
&mut objective,
)
.unwrap();
// The Monitor trait defines how the fuzzer stats are displayed to the user
#[cfg(not(feature = "tui"))]
let mon = SimpleMonitor::new(|s| println!("{s}"));
#[cfg(feature = "tui")]
let mon = TuiMonitor::builder()
.title("Baby Fuzzer Intel PT")
.enhanced_graphics(false)
.build();
// The event manager handles the various events generated during the fuzzing loop
// such as the notification of the addition of a new item to the corpus
let mut mgr = SimpleEventManager::new(mon);
// A queue policy to get testcases from the corpus
let scheduler = QueueScheduler::new();
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
// Get the memory map of the current process
let my_pid = i32::try_from(process::id()).unwrap();
let process_maps = get_process_maps(my_pid).unwrap();
let sections = process_maps
.iter()
.filter_map(|pm| {
if pm.is_exec() && pm.filename().is_some() {
Some(Section {
file_path: pm.filename().unwrap().to_string_lossy().to_string(),
file_offset: pm.offset as u64,
size: pm.size() as u64,
virtual_address: pm.start() as u64,
})
} else {
None
}
})
.collect::<Vec<_>>();
// Intel PT hook that will handle the setup of Intel PT for each execution and fill the map
let pt_hook = unsafe {
IntelPTHook::builder()
.map_ptr(MAP_PTR)
.map_len(MAP_SIZE)
.image(&sections)
}
.build();
type PTInProcessExecutor<'a, H, OT, S, T> =
GenericInProcessExecutor<H, &'a mut H, (IntelPTHook<T>, ()), OT, S>;
// Create the executor for an in-process function with just one observer
let mut executor = PTInProcessExecutor::with_timeout_generic(
tuple_list!(pt_hook),
&mut harness,
tuple_list!(observer),
&mut fuzzer,
&mut state,
&mut mgr,
Duration::from_millis(5000),
)
.expect("Failed to create the Executor");
// Generator of printable bytearrays of max size 32
let mut generator = RandPrintablesGenerator::new(NonZero::new(32).unwrap());
// Generate 8 initial inputs
state
.generate_initial_inputs(&mut fuzzer, &mut executor, &mut generator, &mut mgr, 8)
.expect("Failed to generate the initial corpus");
// Set up a mutational stage with a basic bytes mutator
let mutator = StdScheduledMutator::new(havoc_mutations());
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
fuzzer
.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)
.expect("Error in the fuzzing loop");
}


@@ -0,0 +1,14 @@
[package]
name = "intel_pt_command_executor"
version = "0.1.0"
authors = ["Marco Cavenati <cavenatimarco@gmail.com>"]
edition = "2021"

[dependencies]
env_logger = "0.11.5"
libafl = { path = "../../../libafl", default-features = false, features = [
"intel_pt",
] }
libafl_bolts = { path = "../../../libafl_bolts" }
libafl_intelpt = { path = "../../../libafl_intelpt" }
log = { version = "0.4.22", features = ["release_max_level_info"] }


@@ -0,0 +1,33 @@
[env.development]
PROFILE_DIR = "debug"

[env.release]
PROFILE_DIR = "release"

[tasks.build_target]
command = "rustc"
args = [
"src/target_program.rs",
"--out-dir",
"${CARGO_MAKE_CRATE_TARGET_DIRECTORY}/${PROFILE_DIR}",
"-O",
]

[tasks.build_fuzzer]
command = "cargo"
args = ["build", "--profile", "${CARGO_MAKE_CARGO_PROFILE}"]

[tasks.build]
dependencies = ["build_fuzzer", "build_target"]

[tasks.setcap]
script = "sudo setcap cap_ipc_lock,cap_sys_ptrace,cap_sys_admin,cap_syslog=ep ${CARGO_MAKE_CRATE_TARGET_DIRECTORY}/${PROFILE_DIR}/${CARGO_MAKE_CRATE_NAME}"
dependencies = ["build_fuzzer"]

[tasks.run]
command = "cargo"
args = ["run", "--profile", "${CARGO_MAKE_CARGO_PROFILE}"]
dependencies = ["build", "setcap"]

[tasks.default]
alias = "run"


@@ -0,0 +1,21 @@
# Linux Binary-Only Fuzzer with Intel PT Tracing

This fuzzer is designed to target a Linux binary (without requiring source code instrumentation) and leverages Intel
Processor Trace (PT) to compute code coverage.

## Prerequisites

- A Linux host with an Intel Processor Trace (PT) compatible CPU
- `cargo-make` installed
- Sudo access to grant necessary capabilities to the fuzzer

## How to Run the Fuzzer

To compile and run the fuzzer (and the target program), execute the following command:

```sh
cargo make
```

> **Note**: This command may prompt you for your password to assign capabilities required for Intel PT. If you'd prefer
> not to run it with elevated permissions, you can review and execute the commands from `Makefile.toml`
> individually.


@@ -0,0 +1,146 @@
use std::{
env, ffi::CString, num::NonZero, os::unix::ffi::OsStrExt, path::PathBuf, time::Duration,
};
use libafl::{
corpus::{InMemoryCorpus, OnDiskCorpus},
events::SimpleEventManager,
executors::{
command::{CommandConfigurator, PTraceCommandConfigurator},
hooks::intel_pt::{IntelPTHook, Section},
},
feedbacks::{CrashFeedback, MaxMapFeedback},
fuzzer::{Fuzzer, StdFuzzer},
generators::RandPrintablesGenerator,
monitors::SimpleMonitor,
mutators::{havoc_mutations::havoc_mutations, scheduled::StdScheduledMutator},
observers::StdMapObserver,
schedulers::QueueScheduler,
stages::mutational::StdMutationalStage,
state::StdState,
};
use libafl_bolts::{core_affinity, rands::StdRand, tuples::tuple_list};
use libafl_intelpt::{IntelPT, PAGE_SIZE};
// Coverage map
const MAP_SIZE: usize = 4096;
static mut MAP: [u8; MAP_SIZE] = [0; MAP_SIZE];
#[allow(static_mut_refs)]
static mut MAP_PTR: *mut u8 = unsafe { MAP.as_mut_ptr() };
pub fn main() {
// Let's set the default logging level to `warn`
if env::var("RUST_LOG").is_err() {
env::set_var("RUST_LOG", "warn")
}
// Enable logging
env_logger::init();
let target_path = PathBuf::from(env::args().next().unwrap())
.parent()
.unwrap()
.join("target_program");
// We'll run the target on cpu (aka core) 0
let cpu = core_affinity::get_core_ids().unwrap()[0];
log::debug!("Using core {} for fuzzing", cpu.0);
// Create an observation channel using the map
let observer = unsafe { StdMapObserver::from_mut_ptr("signals", MAP_PTR, MAP_SIZE) };
// Feedback to rate the interestingness of an input
let mut feedback = MaxMapFeedback::new(&observer);
// A feedback to choose if an input is a solution or not
let mut objective = CrashFeedback::new();
// create a State from scratch
let mut state = StdState::new(
// RNG
StdRand::new(),
// Corpus that will be evolved, we keep it in memory for performance
InMemoryCorpus::new(),
// Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new(PathBuf::from("./crashes")).unwrap(),
// States of the feedbacks.
// The feedbacks can report the data that should persist in the State.
&mut feedback,
// Same for objective feedbacks
&mut objective,
)
.unwrap();
// The Monitor trait defines how the fuzzer stats are displayed to the user
let mon = SimpleMonitor::new(|s| println!("{s}"));
// The event manager handles the various events generated during the fuzzing loop
// such as the notification of the addition of a new item to the corpus
let mut mgr = SimpleEventManager::new(mon);
// A queue policy to get testcases from the corpus
let scheduler = QueueScheduler::new();
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
let mut intel_pt = IntelPT::builder().cpu(cpu.0).inherit(true).build().unwrap();
// The target is an ET_DYN ELF; it will be relocated by the loader with this offset.
// see https://github.com/torvalds/linux/blob/c1e939a21eb111a6d6067b38e8e04b8809b64c4e/arch/x86/include/asm/elf.h#L234C1-L239C38
const DEFAULT_MAP_WINDOW: usize = (1 << 47) - PAGE_SIZE;
const ELF_ET_DYN_BASE: usize = DEFAULT_MAP_WINDOW / 3 * 2 & !(PAGE_SIZE - 1);
// Set the instruction pointer (IP) filter and memory image of our target.
// This information can be retrieved from `readelf -l` (for example)
let code_memory_addresses = ELF_ET_DYN_BASE + 0x14000..=ELF_ET_DYN_BASE + 0x14000 + 0x40000;
intel_pt
.set_ip_filters(&[code_memory_addresses.clone()])
.unwrap();
let sections = [Section {
file_path: target_path.to_string_lossy().to_string(),
file_offset: 0x13000,
size: (*code_memory_addresses.end() - *code_memory_addresses.start() + 1) as u64,
virtual_address: *code_memory_addresses.start() as u64,
}];
let hook = unsafe { IntelPTHook::builder().map_ptr(MAP_PTR).map_len(MAP_SIZE) }
.intel_pt(intel_pt)
.image(&sections)
.build();
let target_cstring = CString::from(
target_path
.as_os_str()
.as_bytes()
.iter()
.map(|&b| NonZero::new(b).unwrap())
.collect::<Vec<_>>(),
);
let command_configurator = PTraceCommandConfigurator::builder()
.path(target_cstring)
.cpu(cpu)
.timeout(Duration::from_secs(2))
.build();
let mut executor =
command_configurator.into_executor_with_hooks(tuple_list!(observer), tuple_list!(hook));
// Generator of printable bytearrays of max size 32
let mut generator = RandPrintablesGenerator::new(NonZero::new(32).unwrap());
// Generate 8 initial inputs
state
.generate_initial_inputs(&mut fuzzer, &mut executor, &mut generator, &mut mgr, 8)
.expect("Failed to generate the initial corpus");
// Set up a mutational stage with a basic bytes mutator
let mutator = StdScheduledMutator::new(havoc_mutations());
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
fuzzer
.fuzz_loop(&mut stages, &mut executor, &mut state, &mut mgr)
.expect("Error in the fuzzing loop");
}


@@ -0,0 +1,19 @@
use std::{
hint::black_box,
io::{stdin, Read},
};
fn main() {
let mut buf = Vec::new();
stdin().read_to_end(&mut buf).unwrap();
if !buf.is_empty() && buf[0] == b'a' {
let _do_something = black_box(0);
if buf.len() > 1 && buf[1] == b'b' {
let _do_something = black_box(0);
if buf.len() > 2 && buf[2] == b'c' {
panic!("Artificial bug triggered =)");
}
}
}
}


@@ -49,7 +49,7 @@ document-features = ["dep:document-features"]
std = [
"serde_json",
"serde_json/std",
"dep:nix",
"serde/std",
"bincode",
"wait-timeout",
@@ -107,6 +107,15 @@ regex = ["std", "dep:regex"]
## Enables deduplication based on `libcasr` for `StacktraceObserver`
casr = ["libcasr", "std", "regex"]

## Intel Processor Trace
intel_pt = [
"std",
"dep:libafl_intelpt",
"dep:libipt",
"dep:nix",
"dep:num_enum",
]

## Enables features for corpus minimization
cmin = ["z3"]
@@ -194,12 +203,14 @@ serde_json = { workspace = true, default-features = false, features = [
] }
# clippy-suggested optimised byte counter
bytecount = "0.6.8"
static_assertions = { workspace = true }

[dependencies]
libafl_bolts = { version = "0.13.2", path = "../libafl_bolts", default-features = false, features = [
"alloc",
] }
libafl_derive = { version = "0.13.2", path = "../libafl_derive", optional = true }
libafl_intelpt = { path = "../libafl_intelpt", optional = true }
rustversion = { workspace = true }
tuple_list = { version = "0.1.3" }
@@ -220,7 +231,12 @@ typed-builder = { workspace = true, optional = true } # Implement the builder pa
serde_json = { workspace = true, optional = true, default-features = false, features = [
"alloc",
] }
nix = { workspace = true, optional = true, features = [
"signal",
"ptrace",
"personality",
"fs",
] }
regex = { workspace = true, optional = true }
uuid = { workspace = true, optional = true, features = ["serde", "v4"] }
libm = "0.2.8"
@@ -272,6 +288,8 @@ serial_test = { workspace = true, optional = true, default-features = false, fea
document-features = { workspace = true, optional = true }
# Optional
clap = { workspace = true, optional = true }
num_enum = { workspace = true, optional = true }
libipt = { workspace = true, optional = true }

[lints]
workspace = true


@@ -7,28 +7,41 @@ use core::{
};
#[cfg(unix)]
use std::os::unix::ffi::OsStrExt;
#[cfg(all(feature = "std", target_os = "linux"))]
use std::{
ffi::{CStr, CString},
os::fd::AsRawFd,
};
#[cfg(feature = "std")]
use std::{
ffi::{OsStr, OsString},
io::{Read, Write},
path::{Path, PathBuf},
process::Child,
process::{Command, Stdio},
time::Duration,
};
#[cfg(all(feature = "std", target_os = "linux"))]
use libafl_bolts::core_affinity::CoreId;
use libafl_bolts::{
fs::{get_unique_std_input_file, InputFile},
tuples::{Handle, MatchName, RefIndexable},
AsSlice,
};
#[cfg(all(feature = "std", target_os = "linux"))]
use libc::STDIN_FILENO;
#[cfg(all(feature = "std", target_os = "linux"))]
use nix::unistd::Pid;
#[cfg(all(feature = "std", target_os = "linux"))]
use typed_builder::TypedBuilder;
use super::HasTimeout;
#[cfg(all(feature = "std", unix))]
use crate::executors::{Executor, ExitKind};
use crate::{
corpus::Corpus,
executors::{hooks::ExecutorHooksTuple, HasObservers},
inputs::{HasTargetBytes, UsesInput},
observers::{ObserversTuple, StdErrObserver, StdOutObserver},
state::{HasCorpus, HasExecutions, State, UsesState},
@@ -40,7 +53,7 @@ use crate::{inputs::Input, Error};
/// How to deliver input to an external program
/// `StdIn`: The target reads from stdin
/// `File`: The target reads from the specified [`InputFile`]
#[derive(Debug, Clone, PartialEq, Eq, Default)]
pub enum InputLocation {
/// Mutate a commandline argument to deliver an input
Arg {
@@ -48,6 +61,7 @@ pub enum InputLocation {
argnum: usize,
},
/// Deliver input via `StdIn`
#[default]
StdIn,
/// Deliver the input via the specified [`InputFile`]
/// You can specify [`InputFile::create(INPUTFILE_STD)`] to use a default filename.
@@ -158,16 +172,116 @@
}
}
/// Linux specific [`CommandConfigurator`] that leverages `ptrace`
///
/// This configurator was primarily developed to be used in conjunction with
/// [`crate::executors::hooks::intel_pt::IntelPTHook`]
#[cfg(all(feature = "std", target_os = "linux"))]
#[derive(Debug, Clone, PartialEq, Eq, TypedBuilder)]
pub struct PTraceCommandConfigurator {
#[builder(setter(into))]
path: CString,
#[builder(default)]
args: Vec<CString>,
#[builder(default)]
env: Vec<CString>,
#[builder(default)]
input_location: InputLocation,
#[builder(default, setter(strip_option))]
cpu: Option<CoreId>,
#[builder(default = 5 * 60, setter(transform = |t: Duration| t.as_secs() as u32))]
timeout: u32,
}
#[cfg(all(feature = "std", target_os = "linux"))]
impl<I> CommandConfigurator<I, Pid> for PTraceCommandConfigurator
where
I: HasTargetBytes,
{
fn spawn_child(&mut self, input: &I) -> Result<Pid, Error> {
use nix::{
sys::{
personality, ptrace,
signal::{raise, Signal},
},
unistd::{alarm, dup2, execve, fork, pipe, write, ForkResult},
};
match unsafe { fork() } {
Ok(ForkResult::Parent { child }) => Ok(child),
Ok(ForkResult::Child) => {
ptrace::traceme().unwrap();
if let Some(c) = self.cpu {
c.set_affinity_forced().unwrap();
}
// Disable Address Space Layout Randomization (ASLR) for consistent memory
// addresses between executions
let pers = personality::get().unwrap();
personality::set(pers | personality::Persona::ADDR_NO_RANDOMIZE).unwrap();
match &mut self.input_location {
InputLocation::Arg { argnum } => {
// self.args[argnum] will be overwritten if already present.
assert!(
*argnum <= self.args.len(),
"If you want to fuzz arg {argnum}, you have to specify the other {argnum} (static) args."
);
let terminated_input = [&input.target_bytes() as &[u8], &[0]].concat();
let cstring_input =
CString::from(CStr::from_bytes_until_nul(&terminated_input).unwrap());
if *argnum == self.args.len() {
self.args.push(cstring_input);
} else {
self.args[*argnum] = cstring_input;
}
}
InputLocation::StdIn => {
let (pipe_read, pipe_write) = pipe().unwrap();
write(pipe_write, &input.target_bytes()).unwrap();
dup2(pipe_read.as_raw_fd(), STDIN_FILENO).unwrap();
}
InputLocation::File { out_file } => {
out_file.write_buf(input.target_bytes().as_slice()).unwrap();
}
}
// After this STOP, the process is traced with PTrace (no hooks yet)
raise(Signal::SIGSTOP).unwrap();
alarm::set(self.timeout);
// Just before this returns, hooks pre_execs are called
execve(&self.path, &self.args, &self.env).unwrap();
unreachable!("execve returns only on error and its result is unwrapped");
}
Err(e) => Err(Error::unknown(format!("Fork failed: {e}"))),
}
}
fn exec_timeout(&self) -> Duration {
Duration::from_secs(u64::from(self.timeout))
}
/// Use [`PTraceCommandConfigurator::builder().timeout`] instead
fn exec_timeout_mut(&mut self) -> &mut Duration {
unimplemented!("Use [`PTraceCommandConfigurator::builder().timeout`] instead")
}
}
/// A `CommandExecutor` is a wrapper around [`Command`] to execute a target as a child process.
///
/// Construct a `CommandExecutor` by implementing [`CommandConfigurator`] for a type of your choice and calling [`CommandConfigurator::into_executor`] on it.
/// Instead, you can use [`CommandExecutor::builder()`] to construct a [`CommandExecutor`] backed by a [`StdCommandConfigurator`].
pub struct CommandExecutor<OT, S, T, HT = (), C = Child> {
/// The wrapped command configurer
configurer: T,
/// The observers used by this executor
observers: OT,
hooks: HT,
phantom: PhantomData<S>,
phantom_child: PhantomData<C>,
}
impl CommandExecutor<(), (), ()> { impl CommandExecutor<(), (), ()> {
@ -179,7 +293,7 @@ impl CommandExecutor<(), (), ()> {
/// `arg`, `args`, `env`, and so on.
///
/// By default, input is read from stdin, unless you specify a different location using
/// * `arg_input_arg` for input delivered _as_ a command line argument
/// * `arg_input_file` for input via a file of a specific name
/// * `arg_input_file_std` for a file with default name (at the right location in the arguments)
#[must_use]
@ -188,20 +302,22 @@ impl CommandExecutor<(), (), ()> {
}
}
impl<OT, S, T, HT, C> Debug for CommandExecutor<OT, S, T, HT, C>
where
T: Debug,
OT: Debug,
HT: Debug,
{
fn fmt(&self, f: &mut Formatter<'_>) -> fmt::Result {
f.debug_struct("CommandExecutor")
.field("inner", &self.configurer)
.field("observers", &self.observers)
.field("hooks", &self.hooks)
.finish()
}
}
impl<OT, S, T, HT, C> CommandExecutor<OT, S, T, HT, C>
where
T: Debug,
OT: Debug,
@ -317,14 +433,94 @@ where
}
}
#[cfg(all(feature = "std", target_os = "linux"))]
impl<EM, OT, S, T, Z, HT> Executor<EM, Z> for CommandExecutor<OT, S, T, HT, Pid>
where
EM: UsesState<State = S>,
S: State + HasExecutions + UsesInput,
T: CommandConfigurator<S::Input, Pid> + Debug,
OT: Debug + MatchName + ObserversTuple<S::Input, S>,
Z: UsesState<State = S>,
HT: ExecutorHooksTuple<S>,
{
/// Linux specific low level implementation, to directly handle `fork`, `exec` and use linux
/// `ptrace`
///
/// Hooks' `pre_exec` and observers' `pre_exec_child` are called with the child process stopped
/// just before the `exec` return (after forking).
fn run_target(
&mut self,
_fuzzer: &mut Z,
state: &mut Self::State,
_mgr: &mut EM,
input: &Self::Input,
) -> Result<ExitKind, Error> {
use nix::sys::{
ptrace,
signal::Signal,
wait::{
waitpid, WaitPidFlag,
WaitStatus::{Exited, PtraceEvent, Signaled, Stopped},
},
};
*state.executions_mut() += 1;
let child = self.configurer.spawn_child(input)?;
let wait_status = waitpid(child, Some(WaitPidFlag::WUNTRACED))?;
if !matches!(wait_status, Stopped(c, Signal::SIGSTOP) if c == child) {
return Err(Error::unknown("Unexpected state of child process"));
}
ptrace::setoptions(child, ptrace::Options::PTRACE_O_TRACEEXEC)?;
ptrace::cont(child, None)?;
let wait_status = waitpid(child, None)?;
if !matches!(wait_status, PtraceEvent(c, Signal::SIGTRAP, e)
if c == child && e == (ptrace::Event::PTRACE_EVENT_EXEC as i32)
) {
return Err(Error::unknown("Unexpected state of child process"));
}
self.observers.pre_exec_child_all(state, input)?;
if *state.executions() == 1 {
self.hooks.init_all::<Self>(state);
}
self.hooks.pre_exec_all(state, input);
ptrace::detach(child, None)?;
let res = match waitpid(child, None)? {
Exited(pid, 0) if pid == child => ExitKind::Ok,
Exited(pid, _) if pid == child => ExitKind::Crash,
Signaled(pid, Signal::SIGALRM, _has_coredump) if pid == child => ExitKind::Timeout,
Signaled(pid, Signal::SIGABRT, _has_coredump) if pid == child => ExitKind::Crash,
Signaled(pid, Signal::SIGKILL, _has_coredump) if pid == child => ExitKind::Oom,
Stopped(pid, Signal::SIGALRM) if pid == child => ExitKind::Timeout,
Stopped(pid, Signal::SIGABRT) if pid == child => ExitKind::Crash,
Stopped(pid, Signal::SIGKILL) if pid == child => ExitKind::Oom,
s => {
// TODO other cases?
return Err(Error::unsupported(
format!("Target program returned an unexpected state when waiting on it. {s:?} (waiting for pid {child})")
));
}
};
self.hooks.post_exec_all(state, input);
self.observers.post_exec_child_all(state, input, &res)?;
Ok(res)
}
}
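The verdict mapping at the end of `run_target` boils down to: exit code 0 is `Ok`, any other exit code is `Crash`, and a death or stop by `SIGALRM`/`SIGABRT`/`SIGKILL` means timeout, crash, or OOM respectively (the alarm set before `execve` delivers the `SIGALRM`). A standalone sketch of that decision, in plain std Rust; `ExitKind` and `classify` here are local stand-ins for the LibAFL and nix types, not the real API:

```rust
// Simplified model of the wait-status matching in `run_target` above.
// The real code matches on `nix::sys::wait::WaitStatus`; plain signal
// numbers stand in here.
#[derive(Debug, PartialEq)]
enum ExitKind {
    Ok,
    Crash,
    Timeout,
    Oom,
}

const SIGABRT: i32 = 6;
const SIGKILL: i32 = 9;
const SIGALRM: i32 = 14;

/// `exit_code` is `Some(code)` if the child exited normally,
/// `signal` is `Some(signo)` if it was killed or stopped by a signal.
fn classify(exit_code: Option<i32>, signal: Option<i32>) -> Option<ExitKind> {
    match (exit_code, signal) {
        (Some(0), _) => Some(ExitKind::Ok),
        (Some(_), _) => Some(ExitKind::Crash),
        // the alarm set just before execve fired: timeout
        (None, Some(SIGALRM)) => Some(ExitKind::Timeout),
        (None, Some(SIGABRT)) => Some(ExitKind::Crash),
        // the OOM killer delivers SIGKILL
        (None, Some(SIGKILL)) => Some(ExitKind::Oom),
        // any other state maps to an `Error` in the real code
        _ => None,
    }
}

fn main() {
    assert_eq!(classify(Some(0), None), Some(ExitKind::Ok));
    assert_eq!(classify(None, Some(SIGALRM)), Some(ExitKind::Timeout));
    assert_eq!(classify(None, Some(SIGKILL)), Some(ExitKind::Oom));
    println!("ok");
}
```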
impl<OT, S, T, HT, C> UsesState for CommandExecutor<OT, S, T, HT, C>
where
S: State,
{
type State = S;
}
impl<OT, S, T, HT, C> HasObservers for CommandExecutor<OT, S, T, HT, C>
where
S: State,
T: Debug,
@ -569,7 +765,7 @@ impl CommandExecutorBuilder {
}
}
/// A `CommandConfigurator` takes care of creating and spawning a [`Command`] for the [`CommandExecutor`].
/// # Example
#[cfg_attr(all(feature = "std", unix), doc = " ```")]
#[cfg_attr(not(all(feature = "std", unix)), doc = " ```ignore")]
@ -614,7 +810,7 @@ impl CommandExecutorBuilder {
/// }
/// ```
#[cfg(all(feature = "std", any(unix, doc)))]
pub trait CommandConfigurator<I, C = Child>: Sized {
/// Get the stdout
fn stdout_observer(&self) -> Option<Handle<StdOutObserver>> {
None
@ -625,7 +821,7 @@ pub trait CommandConfigurator<I>: Sized {
}
/// Spawns a new process with the given configuration.
fn spawn_child(&mut self, input: &I) -> Result<C, Error>;
/// Provides timeout duration for execution of the child process.
fn exec_timeout(&self) -> Duration;
@ -633,14 +829,36 @@ pub trait CommandConfigurator<I>: Sized {
fn exec_timeout_mut(&mut self) -> &mut Duration;
/// Create an `Executor` from this `CommandConfigurator`.
fn into_executor<OT, S>(self, observers: OT) -> CommandExecutor<OT, S, Self, (), C>
where
OT: MatchName,
{
CommandExecutor {
configurer: self,
observers,
hooks: (),
phantom: PhantomData,
phantom_child: PhantomData,
}
}
/// Create an `Executor` with hooks from this `CommandConfigurator`.
fn into_executor_with_hooks<OT, S, HT>(
self,
observers: OT,
hooks: HT,
) -> CommandExecutor<OT, S, Self, HT, C>
where
OT: MatchName,
HT: ExecutorHooksTuple<S>,
S: UsesInput<Input = I>,
{
CommandExecutor {
configurer: self,
observers,
hooks,
phantom: PhantomData,
phantom_child: PhantomData,
}
}
}


@ -0,0 +1,106 @@
use core::fmt::Debug;
use std::{
ptr::slice_from_raw_parts_mut,
string::{String, ToString},
};
use libafl_intelpt::{error_from_pt_error, IntelPT};
use libipt::{Asid, Image, SectionCache};
use num_traits::SaturatingAdd;
use serde::Serialize;
use typed_builder::TypedBuilder;
use crate::{
executors::{hooks::ExecutorHook, HasObservers},
inputs::UsesInput,
Error,
};
/// Info about a binary's section that can be used during `Intel PT` trace decoding
#[derive(Debug, Clone, PartialEq, Eq)]
pub struct Section {
/// Path of the binary
pub file_path: String,
/// Offset of the section in the file
pub file_offset: u64,
/// Size of the section
pub size: u64,
/// Start virtual address of the section once loaded in memory
pub virtual_address: u64,
}
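The decoder uses these fields to map `size` bytes starting at `file_offset` in `file_path` to the address range `[virtual_address, virtual_address + size)` of the traced process. A plain-Rust illustration of that mapping; the `Section` here is a local copy of the struct above, and `virt_to_file_offset` is a hypothetical helper, not part of the crate:

```rust
// Local copy of the `Section` struct above: the decoder maps `size`
// bytes at `file_offset` in `file_path` onto the virtual address
// range [virtual_address, virtual_address + size).
struct Section {
    file_path: String,
    file_offset: u64,
    size: u64,
    virtual_address: u64,
}

// Translate a virtual address seen in a trace back to its file offset,
// if it falls inside the section. Illustrative helper only.
fn virt_to_file_offset(s: &Section, vaddr: u64) -> Option<u64> {
    if vaddr >= s.virtual_address && vaddr < s.virtual_address + s.size {
        Some(vaddr - s.virtual_address + s.file_offset)
    } else {
        None
    }
}

fn main() {
    // Hypothetical .text section of a PIE binary; all values made up.
    let text = Section {
        file_path: "/tmp/target".into(),
        file_offset: 0x1000,
        size: 0x2000,
        virtual_address: 0x5555_5555_5000,
    };
    assert_eq!(virt_to_file_offset(&text, 0x5555_5555_5000), Some(0x1000));
    assert_eq!(virt_to_file_offset(&text, 0x1000), None);
    println!("{}", text.file_path);
}
```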
/// Hook to enable Intel Processor Trace (PT) tracing
#[derive(TypedBuilder)]
pub struct IntelPTHook<T> {
#[builder(default = IntelPT::builder().build().unwrap())]
intel_pt: IntelPT,
#[builder(setter(transform = |sections: &[Section]| sections_to_image(sections).unwrap()))]
image: (Image<'static>, SectionCache<'static>),
map_ptr: *mut T,
map_len: usize,
}
// FIXME: just derive(Debug) once https://github.com/sum-catnip/libipt-rs/pull/4 is on crates.io
impl<T> Debug for IntelPTHook<T> {
fn fmt(&self, f: &mut core::fmt::Formatter<'_>) -> Result<(), core::fmt::Error> {
f.debug_struct("IntelPTHook")
.field("intel_pt", &self.intel_pt)
.field("map_ptr", &self.map_ptr)
.field("map_len", &self.map_len)
.finish()
}
}
impl<S, T> ExecutorHook<S> for IntelPTHook<T>
where
S: UsesInput + Serialize,
T: SaturatingAdd + From<u8> + Debug,
{
fn init<E: HasObservers>(&mut self, _state: &mut S) {}
fn pre_exec(&mut self, _state: &mut S, _input: &S::Input) {
self.intel_pt.enable_tracing().unwrap();
}
fn post_exec(&mut self, _state: &mut S, _input: &S::Input) {
self.intel_pt.disable_tracing().unwrap();
let slice = unsafe { &mut *slice_from_raw_parts_mut(self.map_ptr, self.map_len) };
let _ = self
.intel_pt
.decode_traces_into_map(&mut self.image.0, slice)
.inspect_err(|e| log::warn!("Intel PT trace decoding failed: {e}"));
}
}
// It would be nice to have this as a `TryFrom<IntoIter<Section>>`, but Rust's orphan rule
// doesn't allow it (and `TryFromIter` is not a thing at the moment)
fn sections_to_image(
sections: &[Section],
) -> Result<(Image<'static>, SectionCache<'static>), Error> {
let mut image_cache = SectionCache::new(Some("image_cache")).map_err(error_from_pt_error)?;
let mut image = Image::new(Some("image")).map_err(error_from_pt_error)?;
for s in sections {
let isid = image_cache.add_file(&s.file_path, s.file_offset, s.size, s.virtual_address);
if let Err(e) = isid {
log::warn!(
"Error while caching {} {} - skipped",
s.file_path,
e.to_string()
);
continue;
}
if let Err(e) = image.add_cached(&mut image_cache, isid.unwrap(), Asid::default()) {
log::warn!(
"Error while adding cache to image {} {} - skipped",
s.file_path,
e.to_string()
);
continue;
}
}
Ok((image, image_cache))
}


@ -22,6 +22,10 @@ pub mod inprocess;
#[cfg(feature = "std")]
pub mod timer;
/// Intel Processor Trace (PT)
#[cfg(all(feature = "intel_pt", target_os = "linux"))]
pub mod intel_pt;
/// The hook that runs before and after the executor runs the target
pub trait ExecutorHook<S>
where


@ -19,7 +19,6 @@ categories = [
"os",
"no-std",
]
rust-version = "1.70.0"
[package.metadata.docs.rs]
features = ["document-features"]
@ -121,7 +120,7 @@ rustversion = { workspace = true }
[dependencies]
libafl_derive = { version = "0.13.2", optional = true, path = "../libafl_derive" }
static_assertions = { workspace = true }
tuple_list = { version = "0.1.3" }
hashbrown = { workspace = true, features = [


@ -1294,7 +1294,7 @@ where
log::debug!(
"[{} - {:#x}] Send message with id {}",
self.id.0,
ptr::from_ref::<Self>(self) as u64,
mid
);
@ -1710,7 +1710,7 @@ where
log::debug!(
"[{} - {:#x}] Received message with ID {}...",
self.id.0,
ptr::from_ref::<Self>(self) as u64,
(*msg).message_id.0
);


@ -195,11 +195,7 @@ where
let shmem_content = self.content_mut();
unsafe {
ptr::copy_nonoverlapping(EXITING_MAGIC.as_ptr(), shmem_content.buf.as_mut_ptr(), len);
}
shmem_content.buf_len = EXITING_MAGIC.len();
}

libafl_intelpt/Cargo.toml (new file, 42 lines)

@ -0,0 +1,42 @@
[package]
name = "libafl_intelpt"
version.workspace = true
authors = ["Marco Cavenati <cavenatimarco@gmail.com>"]
description = "Intel Processor Trace wrapper for libafl"
repository = "https://github.com/AFLplusplus/LibAFL/"
edition = "2021"
license.workspace = true
readme = "./README.md"
keywords = ["fuzzing", "testing", "security", "intelpt"]
categories = ["development-tools::testing", "no-std"]
[features]
default = ["std", "libipt"]
std = ["libafl_bolts/std"]
libipt = ["std", "dep:libipt"]
[dev-dependencies]
static_assertions = { workspace = true }
[target.'cfg(target_os = "linux" )'.dev-dependencies]
nix = { workspace = true }
proc-maps = "0.4.0"
[dependencies]
#arbitrary-int = { version = "1.2.7" }
#bitbybit = { version = "1.3.2" }
libafl_bolts = { path = "../libafl_bolts", default-features = false }
libc = { workspace = true }
libipt = { workspace = true, optional = true }
log = { workspace = true }
num_enum = { workspace = true, default-features = false }
num-traits = { workspace = true, default-features = false }
raw-cpuid = { version = "11.1.0" }
[target.'cfg(target_os = "linux" )'.dependencies]
caps = { version = "0.5.5" }
perf-event-open-sys = { version = "4.0.0" }
[lints]
workspace = true

libafl_intelpt/README.md (new file, 5 lines)

@ -0,0 +1,5 @@
# Intel Processor Trace (PT) low level code
This crate is a wrapper around the Intel PT kernel driver, exposing functionality specifically crafted for LibAFL.
At the moment, only Linux hosts are supported.
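A rough usage sketch, distilled from the crate's integration test below: the `availability` check, the `IntelPT` builder, the enable/disable calls, and `decode_traces_into_map` are the calls exercised there; treat the exact signatures as indicative rather than authoritative:

```rust
use libafl_intelpt::{availability, IntelPT};
use libipt::Image;

fn main() {
    // Bail out on machines/kernels without Intel PT support
    if let Err(reason) = availability() {
        eprintln!("Intel PT not available:\n{reason}");
        return;
    }

    // Trace a process by pid (tracing ourselves here, purely for illustration)
    let pid = std::process::id() as i32;
    let mut pt = IntelPT::builder()
        .pid(Some(pid))
        .build()
        .expect("failed to set up Intel PT");

    pt.enable_tracing().expect("failed to enable tracing");
    // ... run the code to be traced ...
    pt.disable_tracing().expect("failed to disable tracing");

    // Decode the raw packets into a coverage-style map, using a libipt
    // `Image` that describes the target's executable memory mappings
    // (filled via `Image::add_file` for each mapping, omitted here).
    let mut image = Image::new(Some("target")).unwrap();
    let mut map = vec![0u16; 0x1000];
    pt.decode_traces_into_map(&mut image, &mut map)
        .expect("failed to decode traces");
}
```

Note that running this requires the capabilities granted by `run_integration_tests_linux_with_caps.sh` further down.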

libafl_intelpt/src/lib.rs (new file, 1030 lines; diff too large to display)


@ -0,0 +1,95 @@
#![cfg(feature = "std")]
#![cfg(feature = "libipt")]
#![cfg(target_os = "linux")]
use std::{arch::asm, process};
use libafl_intelpt::{availability, IntelPT};
use libipt::Image;
use nix::{
sys::{
signal::{kill, raise, Signal},
wait::{waitpid, WaitPidFlag},
},
unistd::{fork, ForkResult},
};
use proc_maps::get_process_maps;
/// To run this test ensure that the executable has the required capabilities.
/// This can be achieved with the script `./run_integration_tests_linux_with_caps.sh`
#[test]
fn intel_pt_trace_fork() {
if let Err(reason) = availability() {
// Mark as `skipped` once this is possible: https://github.com/rust-lang/rust/issues/68007
println!("Intel PT is not available, skipping test. Reasons:");
println!("{reason}");
return;
}
let pid = match unsafe { fork() } {
Ok(ForkResult::Parent { child }) => child,
Ok(ForkResult::Child) => {
raise(Signal::SIGSTOP).expect("Failed to stop the process");
// This will generate a sequence of TNT packets containing 255 taken branches
unsafe {
let mut count = 0;
asm!(
"2:",
"add {0:r}, 1",
"cmp {0:r}, 255",
"jle 2b",
inout(reg) count,
options(nostack)
);
let _ = count;
}
process::exit(0);
}
Err(e) => panic!("Fork failed {e}"),
};
let pt_builder = IntelPT::builder().pid(Some(pid.as_raw()));
let mut pt = pt_builder.build().expect("Failed to create IntelPT");
pt.enable_tracing().expect("Failed to enable tracing");
waitpid(pid, Some(WaitPidFlag::WUNTRACED)).expect("Failed to wait for the child process");
let maps = get_process_maps(pid.into()).unwrap();
kill(pid, Signal::SIGCONT).expect("Failed to continue the process");
waitpid(pid, None).expect("Failed to wait for the child process");
pt.disable_tracing().expect("Failed to disable tracing");
let mut image = Image::new(Some("test_trace_pid")).unwrap();
for map in maps {
if map.is_exec() && map.filename().is_some() {
match image.add_file(
map.filename().unwrap().to_str().unwrap(),
map.offset as u64,
map.size() as u64,
None,
map.start() as u64,
) {
Err(e) => println!(
"Error adding mapping for {:?}: {:?}, skipping",
map.filename().unwrap(),
e
),
Ok(()) => println!(
"mapping for {:?} added successfully {:#x} - {:#x}",
map.filename().unwrap(),
map.start(),
map.start() + map.size()
),
}
}
}
let mut map = vec![0u16; 0x10_00];
pt.decode_traces_into_map(&mut image, &mut map).unwrap();
let assembly_jump_id = map.iter().position(|count| *count >= 254);
assert!(
assembly_jump_id.is_some(),
"Assembly jumps not found in traces"
);
}
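Why the final assertion accepts any count of at least 254: the `jle 2b` in the asm loop is taken on every iteration except the last, so the map entry for the jump's address should land at 255, with one branch of slack allowed for decoding edge effects. A plain-Rust model of the loop's branch behavior (illustrative only, names are made up):

```rust
// Model of the asm loop above (`add` / `cmp` / `jle 2b`):
// the backward jump is *taken* while count <= 255 and falls
// through once, so a per-branch counter sees 255 taken + 1 not-taken.
fn simulate_loop() -> (u32, u32) {
    let mut count: i64 = 0;
    let mut taken = 0;
    let mut not_taken = 0;
    loop {
        count += 1;
        if count <= 255 {
            taken += 1; // jle 2b taken: loop again
        } else {
            not_taken += 1; // fall through: loop exits
            break;
        }
    }
    (taken, not_taken)
}

fn main() {
    let (taken, not_taken) = simulate_loop();
    assert_eq!(taken, 255);
    assert_eq!(not_taken, 1);
    println!("taken={taken} not_taken={not_taken}");
}
```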


@ -0,0 +1,11 @@
#!/usr/bin/env bash
cargo test intel_pt_trace_fork --no-run
for test_bin in ../target/debug/deps/integration_tests_linux-*; do
if file "$test_bin" | grep -q "ELF"; then
sudo setcap cap_ipc_lock,cap_sys_ptrace,cap_sys_admin,cap_syslog=ep "$test_bin"
fi
done
cargo test intel_pt_trace_fork