Concolic Tracing (#160)

* add stub runtime that links with symcc common runtime code

* implement tracing runtime to generate message file

* move ShMemCursor to libafl proper

* qualify enum imports to make clippy happy

* fix warnings

* formatting

* update symcc submodule to point to AFL++ org repo

* fix naming of ShMemCursor and remove std requirement

* ensure runtime is named correctly after compilation

* add devcontainer files for easier development

(will be removed later)

* move rust nightly install into devcontainer.json

this makes it run after the container has been built

* dev container: install recommended packages

* switch to building rust runtime from SymCC cmake

* install corrosion in dev container for cmake-cargo integration

* add smoke test for symcc-runtime integration

* update symcc submodule

* add rustfmt to devcontainer

* properly mark the end of a constraint trace

Using a special "End" message

* small tool to dump constraints from a traced process

* extend smoke test to include parsing & printing of constraints

* update symcc submodule

* first draft of expression filters for concolic

* fix typo in runtime method name

* update symcc submodule

* implement extensions to serdeany map:

* remove -> Option<T>
* insert_boxed(Box<T>) (avoids allocation if value is already boxed)

* implement std::io::Seek for ShMemCursor

* implement framing for in-memory traces

this makes it possible to efficiently determine the length of a trace,
which is important for efficiently copying the trace out of the shared
memory region.

* fix for serdeany map

* fuzzer that associates concolic traces with test cases

* ensure runtime can handle 0-expressions

* move metadata, observer and feedback into separate files

* convert executor to command executor and move to separate file

* refactoring and streamlining

* move panic mode configuration to cmake script

* compile cmake from source, because Debian's version is too old

* use separate stage for tracing

* fix dockerfile

* move runtime into the workspace

using prior work on compilation flags from cmake

* actually make use of selective symbolication filter

* update to support latest symcc changes

* implement hitmap for concolic runtime

* clippy

* implement selective symbolization and coverage map for dump_constraints tool

* use concolic runtime coverage for concolic fuzzer feedback

* actually kill process on timeout

* be extra careful after killing process

* increase command executor busy wait to 5ms

* implement concolic tracing stage

* address naming issue

* implement floating point expression filter for runtime

* rename expression filters to be less verbose

* implement expression pruning

* implement ConcolicMutationalStage

* refactor command executor and remove busy loop

* implement generic command executor

* remove debug prints

* refactor + documentation

* refactor

* fixed build, clippy

* no_std

* implement WithObservers executor as discussed

* add symqemu as a submodule

* fix symqemu submodule URL to be relative

* update the concolic runtime to match the new interface

* update the trace file header regularly to save constraints in case the program crashes

* add build dependencies for symqemu

* handle full message buffer properly

* better policy for updating trace header

* less egregiously inefficient GC information serialization

* move concolic runtime hitmap count to filter

this is in preparation for the new runtime interface

* very WIP new runtime interface

* use more convenient types in rust runtime

* EmptyRuntime -> NopRuntime

* hide cpp_runtime and formatting

* implement tracing runtime using new runtime interface

* implement filters with new runtime interface

* use a local checkout for symcc_runtime

* make test runtime tracing

* use test_runtime in smoke test

* fix formatting

* make the clippy overlord happy?

* disable symcc build on everything but linux

* make more of symcc_runtime linux only

* fix linking symcc_runtime with C++ stdlib

* will clippy ever be happy?

* formatting

* don't export symcc runtime when compiling tests

* clippy...

* "don't export symcc runtime when compiling tests" for runtime crate as well

* clippy

* move command executor to LibAFL

* move concolic crate into LibAFL

* move concolic{metadata,observer} into LibAFL

* move ConcolicFeedback into LibAFL

* move ConcolicStage into LibAFL

* fix bug in symcc part of concolic runtime

* stb_image fuzzer with concolic as example fuzzer

* clean up basic_concolic_fuzzer

* clean up and document concolic example fuzzer

* formatting

* clippy

* remove basic_concolic_fuzzer (it is now part of the examples)

* remove the runtime crate in favor of symcc_runtime

* re-architect concolic smoke test and remove git submodules

* remove old submodule directories

* make coverage filter public

* fix docker build

* clippy

* clippy fixes

* fix ubuntu as well

* remove .gitmodules

* move concolic mutational stage into libafl behind feature flag

* script to install dependencies for concolic smoke test

* fix bug

* clippy

* add github action to run smoke test

* fix action

* ensure smoke test is run in correct directory

* remove devcontainer files

* address feedback

* clippy

* more clippy

* address more feedback

Co-authored-by: Dominik Maier <domenukk@gmail.com>
julihoh, 2021-08-05 13:22:00 +02:00 (committed by GitHub)
parent 4d50ba277a
commit 3d98d31712
49 changed files with 11327 additions and 13 deletions


@ -65,6 +65,19 @@ jobs:
run: cargo test --all-features --doc
- name: Run clippy
run: ./scripts/clippy.sh
ubuntu-concolic:
runs-on: ubuntu-latest
steps:
- uses: actions/checkout@v2
- uses: actions-rs/toolchain@v1
with:
profile: minimal
toolchain: stable
- uses: Swatinem/rust-cache@v1
- name: Install smoke test deps
run: sudo ./libafl_concolic/test/smoke_test_ubuntu_deps.sh
- name: Run smoke test
run: ./libafl_concolic/test/smoke_test.sh
ubuntu-fuzzers:
runs-on: ubuntu-latest
steps:

.gitmodules (vendored)


@ -14,6 +14,9 @@ members = [
"libafl_qemu",
"libafl_sugar",
"libafl_tests",
"libafl_concolic/symcc_runtime",
"libafl_concolic/test/dump_constraints",
"libafl_concolic/test/runtime_test",
]
default-members = [
"libafl",


@ -56,6 +56,15 @@ COPY scripts/dummy.rs libafl_targets/src/lib.rs
COPY libafl_tests/Cargo.toml libafl_tests/build.rs libafl_tests/
COPY scripts/dummy.rs libafl_tests/src/lib.rs
COPY libafl_concolic/test/dump_constraints/Cargo.toml libafl_concolic/test/dump_constraints/
COPY scripts/dummy.rs libafl_concolic/test/dump_constraints/src/lib.rs
COPY libafl_concolic/test/runtime_test/Cargo.toml libafl_concolic/test/runtime_test/
COPY scripts/dummy.rs libafl_concolic/test/runtime_test/src/lib.rs
COPY libafl_concolic/symcc_runtime/Cargo.toml libafl_concolic/symcc_runtime/build.rs libafl_concolic/symcc_runtime/
COPY scripts/dummy.rs libafl_concolic/symcc_runtime/src/lib.rs
RUN cargo build && cargo build --release
COPY scripts scripts


@ -16,7 +16,7 @@ fn build_dep_check(tools: &[&str]) {
println!("Checking for build tool {}...", tool);
if let Ok(path) = which(tool) {
println!("Found build tool {}", path.to_str().unwrap())
println!("Found build tool {}", path.to_str().unwrap());
} else {
println!("ERROR: missing build tool {}", tool);
exit(1);


@ -0,0 +1,21 @@
# Hybrid Fuzzing for stb_image
This folder contains an example hybrid fuzzer for stb_image using SymCC.
It is based on the stb_image fuzzer that is also part of the examples.
It has been tested on Linux only, as SymCC only works on Linux.
The fuzzer itself is in the `fuzzer` directory and the concolic runtime lives in `runtime`.
## Build
To build this example, run `cargo build --release` in the `runtime` and `fuzzer` directories separately (and in that order).
This will build the fuzzer like it does in the stb_image case, but _additionally_ builds a version of the target that is instrumented with SymCC concolic instrumentation (`harness_symcc.c`).
This separate version also doesn't conform to LibFuzzer's interface, but is instead a simple program with the same behaviour as the LibFuzzer version (`harness.c`), because the SymCC runtime expects the target's environment to be destroyed after a single execution (i.e. it doesn't clean up its resources).
Building the separate concolic version of the target also requires a concolic runtime, which is part of the `runtime` folder.
The build script of the fuzzer will check that the runtime has been built, but triggering the build needs to be done manually (i.e. run `cargo build (--release)` in the runtime folder before building the fuzzer).
The build script will also build SymCC.
Therefore, all build dependencies for SymCC should be available beforehand.
## Run
The first time you run the binary (`target/release/libfuzzer_stb_image_concolic`), the broker will open a tcp port (currently on port `1337`), waiting for fuzzer clients to connect. This port is local and only used for the initial handshake. All further communication happens via shared map, to be independent of the kernel.


@ -0,0 +1,2 @@
libpng-*
cur_input


@ -0,0 +1,26 @@
[package]
name = "libfuzzer_stb_image_concolic"
version = "0.5.0"
authors = ["Andrea Fioraldi <andreafioraldi@gmail.com>", "Dominik Maier <domenukk@gmail.com>", "Julius Hohnerlein"]
edition = "2018"
build = "build.rs"
[features]
default = ["std"]
std = []
[profile.release]
lto = true
codegen-units = 1
opt-level = 3
debug = true
[dependencies]
libafl = { path = "../../../libafl/", features = ["concolic_mutation"] }
libafl_targets = { path = "../../../libafl_targets/", features = ["sancov_pcguard_edges", "sancov_cmplog", "libfuzzer"] }
[build-dependencies]
cc = { version = "1.0", features = ["parallel"] }
num_cpus = "1.0"
cmake = "0.1"
which = "4.1"


@ -0,0 +1,139 @@
// build.rs
use std::{
env,
io::{stdout, Write},
path::{Path, PathBuf},
process::{exit, Command},
};
use which::which;
fn build_dep_check(tools: &[&str]) {
for tool in tools {
println!("Checking for build tool {}...", tool);
if let Ok(path) = which(tool) {
println!("Found build tool {}", path.to_str().unwrap());
} else {
println!("ERROR: missing build tool {}", tool);
exit(1);
};
}
}
fn main() {
let out_path = PathBuf::from(&env::var_os("OUT_DIR").unwrap());
println!("cargo:rerun-if-changed=harness.c");
build_dep_check(&["clang", "clang++"]);
// Enforce clang for its -fsanitize-coverage support.
std::env::set_var("CC", "clang");
std::env::set_var("CXX", "clang++");
cc::Build::new()
// Use sanitizer coverage to track the edges in the PUT
.flag("-fsanitize-coverage=trace-pc-guard,trace-cmp")
// Take advantage of LTO (needs lld-link set in your cargo config)
//.flag("-flto=thin")
.flag("-Wno-sign-compare")
.file("./harness.c")
.compile("harness");
println!(
"cargo:rustc-link-search=native={}",
&out_path.to_string_lossy()
);
let symcc_dir = clone_and_build_symcc(&out_path);
let runtime_dir = std::env::current_dir()
.unwrap()
.join("..")
.join("runtime")
.join("target")
.join(std::env::var("PROFILE").unwrap());
if !runtime_dir.join("libSymRuntime.so").exists() {
println!("cargo:warning=Runtime not found. Build it first.");
exit(1);
}
// SymCC.
std::env::set_var("CC", symcc_dir.join("symcc"));
std::env::set_var("CXX", symcc_dir.join("sym++"));
std::env::set_var("SYMCC_RUNTIME_DIR", runtime_dir);
println!("cargo:rerun-if-changed=harness_symcc.c");
let output = cc::Build::new()
.flag("-Wno-sign-compare")
.cargo_metadata(false)
.get_compiler()
.to_command()
.arg("./harness_symcc.c")
.args(["-o", "target_symcc.out"])
.arg("-lm")
.output()
.expect("failed to execute symcc");
if !output.status.success() {
println!("cargo:warning=Building the target with SymCC failed");
let mut stdout = stdout();
stdout
.write_all(&output.stderr)
.expect("failed to write cc error message to stdout");
exit(1);
}
println!("cargo:rerun-if-changed=build.rs");
println!("cargo:rerun-if-changed=harness.c");
println!("cargo:rerun-if-changed=harness_symcc.c");
}
const SYMCC_REPO_URL: &str = "https://github.com/AFLplusplus/symcc.git";
const SYMCC_REPO_COMMIT: &str = "45cde0269ae22aef4cca2e1fb98c3b24f7bb2984";
fn clone_and_build_symcc(out_path: &Path) -> PathBuf {
let repo_dir = out_path.join("libafl_symcc_src");
if !repo_dir.exists() {
build_dep_check(&["git"]);
let mut cmd = Command::new("git");
cmd.arg("clone").arg(SYMCC_REPO_URL).arg(&repo_dir);
let output = cmd.output().expect("failed to execute git clone");
if output.status.success() {
let mut cmd = Command::new("git");
cmd.arg("checkout")
.arg(SYMCC_REPO_COMMIT)
.current_dir(&repo_dir);
let output = cmd.output().expect("failed to execute git checkout");
if output.status.success() {
} else {
println!("failed to checkout symcc git repository commit:");
let mut stdout = stdout();
stdout
.write_all(&output.stderr)
.expect("failed to write git error message to stdout");
exit(1)
}
} else {
println!("failed to clone symcc git repository:");
let mut stdout = stdout();
stdout
.write_all(&output.stderr)
.expect("failed to write git error message to stdout");
exit(1)
}
}
build_dep_check(&["cmake"]);
use cmake::Config;
Config::new(repo_dir)
.define("Z3_TRUST_SYSTEM_VERSION", "ON")
.no_build_target(true)
.build()
.join("build")
}

Four binary files added (218 B, 376 B, 228 B, 427 B); contents not shown.


@ -0,0 +1,28 @@
#include <stdint.h>
#include <assert.h>
#define STBI_ASSERT(x)
#define STBI_NO_SIMD
#define STBI_NO_LINEAR
#define STBI_NO_STDIO
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
int LLVMFuzzerTestOneInput(const uint8_t* data, size_t size)
{
int x, y, channels;
if(!stbi_info_from_memory(data, size, &x, &y, &channels)) return 0;
/* exit if the image is larger than ~80MB */
if(y && x > (80000000 / 4) / y) return 0;
unsigned char *img = stbi_load_from_memory(data, size, &x, &y, &channels, 4);
free(img);
// if (x > 10000) free(img); // free crash
return 0;
}


@ -0,0 +1,39 @@
#include <stdint.h>
#include <stdio.h>
#include <stdlib.h>
// This is the same as harness.c. Instead of the LibFuzzer interface, this
// program simply has a main and takes the input file path as argument.
#define STBI_ASSERT(x)
#define STBI_NO_SIMD
#define STBI_NO_LINEAR
#define STB_IMAGE_IMPLEMENTATION
#include "stb_image.h"
int main(int argc, char **argv) {
if (argc < 2) {
return -1;
}
char *file_path = argv[1];
int x, y, channels;
if (!stbi_info(file_path, &x, &y, &channels))
return 0;
/* exit if the image is larger than ~80MB */
if (y && x > (80000000 / 4) / y)
return 0;
unsigned char *img = stbi_load(file_path, &x, &y, &channels, 4);
free(img);
// if (x > 10000) free(img); // free crash
return 0;
}


@ -0,0 +1,246 @@
//! A libfuzzer-like fuzzer with llmp-multithreading support and restarts
//! The example harness is built for `stb_image`.
use std::{env, path::PathBuf};
use libafl::{
bolts::{
current_nanos,
rands::StdRand,
shmem::{ShMem, ShMemProvider, StdShMemProvider},
tuples::{tuple_list, Named},
},
corpus::{
Corpus, InMemoryCorpus, IndexesLenTimeMinimizerCorpusScheduler, OnDiskCorpus,
QueueCorpusScheduler,
},
events::setup_restarting_mgr_std,
executors::{
command::CommandConfigurator, inprocess::InProcessExecutor, ExitKind, ShadowExecutor,
},
feedback_or,
feedbacks::{CrashFeedback, MapFeedbackState, MaxMapFeedback, TimeFeedback},
fuzzer::{Fuzzer, StdFuzzer},
inputs::{BytesInput, HasTargetBytes, Input},
mutators::{
scheduled::{havoc_mutations, StdScheduledMutator},
token_mutations::I2SRandReplace,
},
observers::{
concolic::{
serialization_format::shared_memory::{DEFAULT_ENV_NAME, DEFAULT_SIZE},
ConcolicObserver,
},
StdMapObserver, TimeObserver,
},
stages::{
ConcolicTracingStage, ShadowTracingStage, SimpleConcolicMutationalStage,
StdMutationalStage, TracingStage,
},
state::{HasCorpus, StdState},
stats::MultiStats,
Error,
};
use libafl_targets::{
libfuzzer_initialize, libfuzzer_test_one_input, CmpLogObserver, CMPLOG_MAP, EDGES_MAP,
MAX_EDGES_NUM,
};
pub fn main() {
// Registry the metadata types used in this fuzzer
// Needed only on no_std
//RegistryBuilder::register::<Tokens>();
println!(
"Workdir: {:?}",
env::current_dir().unwrap().to_string_lossy().to_string()
);
fuzz(
&[PathBuf::from("./corpus")],
PathBuf::from("./crashes"),
1337,
)
.expect("An error occurred while fuzzing");
}
/// The actual fuzzer
fn fuzz(corpus_dirs: &[PathBuf], objective_dir: PathBuf, broker_port: u16) -> Result<(), Error> {
// 'While the stats are state, they are usually used in the broker - which is likely never restarted
let stats = MultiStats::new(|s| println!("{}", s));
// The restarting state will spawn the same process again as child, then restart it each time it crashes.
let (state, mut restarting_mgr) =
match setup_restarting_mgr_std(stats, broker_port, "default".into()) {
Ok(res) => res,
Err(err) => match err {
Error::ShuttingDown => {
return Ok(());
}
_ => {
panic!("Failed to setup the restarter: {}", err);
}
},
};
// Create an observation channel using the coverage map
// We don't use the hitcounts (see the Cargo.toml, we use pcguard_edges)
let edges = unsafe { &mut EDGES_MAP[0..MAX_EDGES_NUM] };
let edges_observer = StdMapObserver::new("edges", edges);
// Create an observation channel to keep track of the execution time
let time_observer = TimeObserver::new("time");
let cmplog = unsafe { &mut CMPLOG_MAP };
let cmplog_observer = CmpLogObserver::new("cmplog", cmplog, true);
// The state of the edges feedback.
let feedback_state = MapFeedbackState::with_observer(&edges_observer);
// Feedback to rate the interestingness of an input
// This one is composed by two Feedbacks in OR
let feedback = feedback_or!(
// New maximization map feedback linked to the edges observer and the feedback state
MaxMapFeedback::new_tracking(&feedback_state, &edges_observer, true, false),
// Time feedback, this one does not need a feedback state
TimeFeedback::new_with_observer(&time_observer)
);
// A feedback to choose if an input is a solution or not
let objective = CrashFeedback::new();
// If not restarting, create a State from scratch
let mut state = state.unwrap_or_else(|| {
StdState::new(
// RNG
StdRand::with_seed(current_nanos()),
// Corpus that will be evolved, we keep it in memory for performance
InMemoryCorpus::new(),
// Corpus in which we store solutions (crashes in this example),
// on disk so the user can get them after stopping the fuzzer
OnDiskCorpus::new(objective_dir).unwrap(),
// States of the feedbacks.
// They are the data related to the feedbacks that you want to persist in the State.
tuple_list!(feedback_state),
)
});
println!("We're a client, let's fuzz :)");
// A minimization+queue policy to get testcases from the corpus
let scheduler = IndexesLenTimeMinimizerCorpusScheduler::new(QueueCorpusScheduler::new());
// A fuzzer with feedbacks and a corpus scheduler
let mut fuzzer = StdFuzzer::new(scheduler, feedback, objective);
// The wrapped harness function, calling out to the LLVM-style harness
let mut harness = |input: &BytesInput| {
let target = input.target_bytes();
let buf = target.as_slice();
libfuzzer_test_one_input(buf);
ExitKind::Ok
};
// Create the executor for an in-process function with just one observer for edge coverage
let mut executor = ShadowExecutor::new(
InProcessExecutor::new(
&mut harness,
tuple_list!(edges_observer, time_observer),
&mut fuzzer,
&mut state,
&mut restarting_mgr,
)?,
tuple_list!(cmplog_observer),
);
// The actual target run starts here.
// Call LLVMFuzzerInitialize() if present.
let args: Vec<String> = env::args().collect();
if libfuzzer_initialize(&args) == -1 {
println!("Warning: LLVMFuzzerInitialize failed with -1")
}
// In case the corpus is empty (on first run), reset
if state.corpus().count() < 1 {
state
.load_initial_inputs(
&mut fuzzer,
&mut executor,
&mut restarting_mgr,
&corpus_dirs,
)
.unwrap_or_else(|_| panic!("Failed to load initial corpus at {:?}", &corpus_dirs));
println!("We imported {} inputs from disk.", state.corpus().count());
}
// Setup a tracing stage in which we log comparisons
let tracing = ShadowTracingStage::new(&mut executor);
// Setup a randomic Input2State stage
let i2s = StdMutationalStage::new(StdScheduledMutator::new(tuple_list!(I2SRandReplace::new())));
// Setup a basic mutator
let mutator = StdScheduledMutator::new(havoc_mutations());
let mutational = StdMutationalStage::new(mutator);
// The shared memory for the concolic runtime to write its trace to
let mut concolic_shmem = StdShMemProvider::new()
.unwrap()
.new_map(DEFAULT_SIZE)
.unwrap();
concolic_shmem.write_to_env(DEFAULT_ENV_NAME).unwrap();
// The concolic observer observes the concolic shared memory map.
let concolic_observer = ConcolicObserver::new("concolic".to_string(), concolic_shmem.map_mut());
let concolic_observer_name = concolic_observer.name().to_string();
// The order of the stages matter!
let mut stages = tuple_list!(
// Create a concolic trace
ConcolicTracingStage::new(
TracingStage::new(
MyCommandConfigurator::default().into_executor(tuple_list!(concolic_observer))
),
concolic_observer_name,
),
// Use the concolic trace for z3-based solving
SimpleConcolicMutationalStage::default(),
tracing,
i2s,
mutational
);
fuzzer.fuzz_loop(&mut stages, &mut executor, &mut state, &mut restarting_mgr)?;
// Never reached
Ok(())
}
use std::process::{Child, Command, Stdio};
#[derive(Default)]
pub struct MyCommandConfigurator;
impl<EM, I, S, Z> CommandConfigurator<EM, I, S, Z> for MyCommandConfigurator
where
I: HasTargetBytes + Input,
{
fn spawn_child(
&mut self,
_fuzzer: &mut Z,
_state: &mut S,
_mgr: &mut EM,
input: &I,
) -> Result<Child, Error> {
input.to_file("cur_input")?;
Ok(Command::new("./target_symcc.out")
.arg("cur_input")
.stdin(Stdio::null())
.stdout(Stdio::null())
.stderr(Stdio::null())
.spawn()
.expect("failed to start process"))
}
}

File diff suppressed because it is too large.


@ -0,0 +1,22 @@
[package]
name = "example_runtime"
version = "0.1.0"
edition = "2018"
[lib]
# the runtime needs to be a shared object -> cdylib
crate-type = ["cdylib"]
# this is necessary for SymCC to find the runtime.
name = "SymRuntime"
[profile.release]
lto = true
codegen-units = 1
opt-level = 3
# this is somewhat important to ensure the runtime does not unwind into the target program.
panic = "abort"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
symcc_runtime = { path = "../../../libafl_concolic/symcc_runtime" }


@ -0,0 +1,20 @@
//! This is a basic SymCC runtime.
//! It traces the execution to the shared memory region that should be passed through the environment by the fuzzer process.
//! Additionally, it concretizes all floating point operations for simplicity.
//! Refer to the `symcc_runtime` crate documentation for building your own runtime.
use symcc_runtime::{
    export_runtime,
    filter::NoFloat,
    tracing::{self, StdShMemMessageFileWriter},
    Runtime,
};
export_runtime!(
NoFloat => NoFloat;
tracing::TracingRuntime::new(
StdShMemMessageFileWriter::from_stdshmem_default_env()
.expect("unable to construct tracing runtime writer. (missing env?)")
)
=> tracing::TracingRuntime
);


@ -37,7 +37,7 @@ harness = false
[features]
default = ["std", "anymap_debug", "derive", "llmp_compression"]
std = ["serde_json", "hostname", "core_affinity", "nix", "serde/std"] # print, env, launcher ... support
std = ["serde_json", "hostname", "core_affinity", "nix", "serde/std", "bincode", "wait-timeout"] # print, env, launcher ... support
anymap_debug = ["serde_json"] # uses serde_json to Debug the anymap trait. Disable for smaller footprint.
derive = ["libafl_derive"] # provide derive(SerdeAny) macro.
rand_trait = ["rand_core"] # If set, libafl's rand implementations will implement `rand::Rng`
@ -46,6 +46,7 @@ llmp_compression = ["miniz_oxide"] # llmp compression using GZip
llmp_debug = ["backtrace"] # Enables debug output for LLMP
llmp_small_maps = [] # reduces initial map size for llmp
introspection = [] # Include performance statistics of the fuzzing pipeline
concolic_mutation = ["z3"] # include a simple concolic mutator based on z3
[[example]]
name = "llmp_test"
@ -60,6 +61,7 @@ xxhash-rust = { version = "0.8.2", features = ["xxh3"] } # xxh3 hashing for rust
serde = { version = "1.0", default-features = false, features = ["alloc"] } # serialization lib
erased-serde = { version = "0.3.12", default-features = false, features = ["alloc"] } # erased serde
postcard = { version = "0.5.1", features = ["alloc"] } # no_std compatible serde serialization fromat
bincode = {version = "1.3", optional = true }
static_assertions = "1.1.0"
ctor = "0.1.20"
num_enum = { version = "0.5.1", default-features = false }
@ -76,6 +78,10 @@ rand_core = { version = "0.6.2", optional = true } # This dependency allows us t
nix = { version = "0.20.0", optional = true }
libm = "0.2.1"
wait-timeout = { version = "0.2", optional = true } # used by CommandExecutor to wait for child process
z3 = { version = "0.10", optional = true } # for concolic mutation
[target.'cfg(target_os = "android")'.dependencies]
backtrace = { version = "0.3", optional = true, default-features = false, features = ["std", "libbacktrace"] } # for llmp_debug


@ -2,6 +2,7 @@
use serde::{Deserialize, Deserializer, Serialize, Serializer};
use alloc;
use alloc::boxed::Box;
use core::any::{Any, TypeId};
@ -35,6 +36,8 @@ pub trait SerdeAny: Any + erased_serde::Serialize {
fn as_any(&self) -> &dyn Any;
/// returns this as mutable Any trait
fn as_any_mut(&mut self) -> &mut dyn Any;
/// returns this as boxed Any trait
fn as_any_boxed(self: Box<Self>) -> Box<dyn Any>;
}
/// Wrap a type for serialization
@ -239,6 +242,18 @@ macro_rules! create_serde_registry_for_trait {
.map(|x| x.as_mut().as_any_mut().downcast_mut::<T>().unwrap())
}
/// Remove an element in the map. Returns the removed element.
#[must_use]
#[inline]
pub fn remove<T>(&mut self) -> Option<Box<T>>
where
T: $trait_name,
{
self.map
.remove(&unpack_type_id(TypeId::of::<T>()))
.map(|x| x.as_any_boxed().downcast::<T>().unwrap())
}
/// Insert an element into the map.
#[inline]
pub fn insert<T>(&mut self, t: T)
@ -249,6 +264,15 @@ macro_rules! create_serde_registry_for_trait {
.insert(unpack_type_id(TypeId::of::<T>()), Box::new(t));
}
/// Insert a boxed element into the map.
#[inline]
pub fn insert_boxed<T>(&mut self, t: Box<T>)
where
T: $trait_name,
{
self.map.insert(unpack_type_id(TypeId::of::<T>()), t);
}
/// Returns the count of elements in this map.
#[must_use]
#[inline]
@ -574,11 +598,17 @@ pub use serdeany_registry::*;
macro_rules! impl_serdeany {
($struct_name:ident) => {
impl $crate::bolts::serdeany::SerdeAny for $struct_name {
fn as_any(&self) -> &dyn core::any::Any {
fn as_any(&self) -> &dyn ::core::any::Any {
self
}
fn as_any_mut(&mut self) -> &mut dyn core::any::Any {
fn as_any_mut(&mut self) -> &mut dyn ::core::any::Any {
self
}
fn as_any_boxed(
self: ::std::boxed::Box<Self>,
) -> ::std::boxed::Box<dyn ::core::any::Any> {
self
}
}
@ -596,11 +626,17 @@ macro_rules! impl_serdeany {
macro_rules! impl_serdeany {
($struct_name:ident) => {
impl $crate::bolts::serdeany::SerdeAny for $struct_name {
fn as_any(&self) -> &dyn core::any::Any {
fn as_any(&self) -> &dyn ::core::any::Any {
self
}
fn as_any_mut(&mut self) -> &mut dyn core::any::Any {
fn as_any_mut(&mut self) -> &mut dyn ::core::any::Any {
self
}
fn as_any_boxed(
self: ::alloc::boxed::Box<Self>,
) -> ::alloc::boxed::Box<dyn ::core::any::Any> {
self
}
}
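
For orientation, a minimal sketch (not part of this diff) of how the new `remove` and `insert_boxed` additions are meant to be used together; the `MyMetadata` type is a hypothetical stand-in for any metadata stored in a `SerdeAnyMap`:

```rust
use libafl::{bolts::serdeany::SerdeAnyMap, impl_serdeany};
use serde::{Deserialize, Serialize};

// Hypothetical metadata type, for illustration only.
#[derive(Debug, Serialize, Deserialize)]
struct MyMetadata {
    hits: usize,
}
impl_serdeany!(MyMetadata);

/// Move a metadata entry from one map to another without re-boxing it:
/// `remove` hands back the boxed value, and `insert_boxed` accepts it as-is.
fn move_metadata(from: &mut SerdeAnyMap, to: &mut SerdeAnyMap) {
    if let Some(boxed) = from.remove::<MyMetadata>() {
        to.insert_boxed(boxed);
    }
}
```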


@ -1,6 +1,13 @@
//! A generic sharememory region to be used by any functions (queues or feedbacks
// too.)
use alloc::{rc::Rc, string::ToString};
use core::{
cell::RefCell,
fmt::{self, Debug, Display},
mem::ManuallyDrop,
};
#[cfg(all(feature = "std", unix))]
pub use unix_shmem::{UnixShMem, UnixShMemProvider};
/// The default [`ShMemProvider`] for this os.
@ -35,13 +42,9 @@ pub type StdShMem = OsShMem;
use serde::{Deserialize, Serialize};
#[cfg(feature = "std")]
use std::env;
use alloc::{rc::Rc, string::ToString};
use core::{
cell::RefCell,
fmt::{self, Debug, Display},
mem::ManuallyDrop,
use std::{
convert::{TryFrom, TryInto},
env,
};
#[cfg(all(unix, feature = "std"))]
@ -995,3 +998,88 @@ pub mod win32_shmem {
}
}
}
/// A cursor around [`ShMem`] that immitates [`std::io::Cursor`]. Notably, this implements [`Write`] for [`ShMem`] in std environments.
pub struct ShMemCursor<T: ShMem> {
inner: T,
pos: usize,
}
impl<T: ShMem> ShMemCursor<T> {
pub fn new(shmem: T) -> Self {
Self {
inner: shmem,
pos: 0,
}
}
/// Slice from the current location on this map to the end, mutable
fn empty_slice_mut(&mut self) -> &mut [u8] {
&mut (self.inner.map_mut()[self.pos..])
}
}
#[cfg(feature = "std")]
impl<T: ShMem> std::io::Write for ShMemCursor<T> {
fn write(&mut self, buf: &[u8]) -> std::io::Result<usize> {
match self.empty_slice_mut().write(buf) {
Ok(w) => {
self.pos += w;
Ok(w)
}
Err(e) => Err(e),
}
}
fn flush(&mut self) -> std::io::Result<()> {
Ok(())
}
fn write_vectored(&mut self, bufs: &[std::io::IoSlice<'_>]) -> std::io::Result<usize> {
match self.empty_slice_mut().write_vectored(bufs) {
Ok(w) => {
self.pos += w;
Ok(w)
}
Err(e) => Err(e),
}
}
fn write_all(&mut self, buf: &[u8]) -> std::io::Result<()> {
match self.empty_slice_mut().write_all(buf) {
Ok(w) => {
self.pos += buf.len();
Ok(w)
}
Err(e) => Err(e),
}
}
}
#[cfg(feature = "std")]
impl<T: ShMem> std::io::Seek for ShMemCursor<T> {
fn seek(&mut self, pos: std::io::SeekFrom) -> std::io::Result<u64> {
let effective_new_pos = match pos {
std::io::SeekFrom::Start(s) => s,
std::io::SeekFrom::End(offset) => {
let map_len = self.inner.map().len();
assert!(i64::try_from(map_len).is_ok());
let signed_pos = map_len as i64;
let effective = signed_pos.checked_add(offset).unwrap();
assert!(effective >= 0);
effective.try_into().unwrap()
}
std::io::SeekFrom::Current(offset) => {
let current_pos = self.pos;
assert!(i64::try_from(current_pos).is_ok());
let signed_pos = current_pos as i64;
let effective = signed_pos.checked_add(offset).unwrap();
assert!(effective >= 0);
effective.try_into().unwrap()
}
};
assert!(usize::try_from(effective_new_pos).is_ok());
self.pos = effective_new_pos as usize;
Ok(effective_new_pos)
}
}
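
A rough usage sketch (not taken from this patch): `ShMemCursor` makes a shared memory map writable and seekable like a file, which is what the concolic trace writer relies on to patch its length header in place.

```rust
use std::io::{Seek, SeekFrom, Write};

use libafl::bolts::shmem::{ShMemCursor, ShMemProvider, StdShMemProvider};

fn shmem_cursor_demo() {
    // Allocate a small shared memory map and wrap it in a cursor.
    let shmem = StdShMemProvider::new()
        .expect("failed to create the shared memory provider")
        .new_map(4096)
        .expect("failed to allocate shared memory");
    let mut cursor = ShMemCursor::new(shmem);

    // Write some data, then seek back to the start and overwrite the first bytes,
    // just like MessageFileWriter rewrites the trace length prefix.
    cursor.write_all(b"trace data").unwrap();
    cursor.seek(SeekFrom::Start(0)).unwrap();
    cursor.write_all(&10u64.to_le_bytes()).unwrap();
}
```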


@ -0,0 +1,132 @@
use core::marker::PhantomData;
#[cfg(feature = "std")]
use std::{process::Child, time::Duration};
use crate::{
executors::{Executor, ExitKind, HasObservers},
inputs::Input,
observers::ObserversTuple,
Error,
};
/// A `CommandExecutor` is a wrapper around [`std::process::Command`] to execute a target as a child process.
/// Construct a `CommandExecutor` by implementing [`CommandConfigurator`] for a type of your choice and calling [`CommandConfigurator::into_executor`] on it.
pub struct CommandExecutor<EM, I, S, Z, T, OT> {
inner: T,
observers: OT,
phantom: PhantomData<(EM, I, S, Z)>,
}
// this only works on unix because of the reliance on checking the process signal for detecting OOM
#[cfg(all(feature = "std", unix))]
impl<EM, I, S, Z, T, OT> Executor<EM, I, S, Z> for CommandExecutor<EM, I, S, Z, T, OT>
where
I: Input,
T: CommandConfigurator<EM, I, S, Z>,
OT: ObserversTuple<I, S>,
{
fn run_target(
&mut self,
_fuzzer: &mut Z,
_state: &mut S,
_mgr: &mut EM,
input: &I,
) -> Result<ExitKind, Error> {
use std::os::unix::prelude::ExitStatusExt;
use wait_timeout::ChildExt;
let mut child = self.inner.spawn_child(_fuzzer, _state, _mgr, input)?;
match child
.wait_timeout(Duration::from_secs(5))
.expect("waiting on child failed")
.map(|status| status.signal())
{
// for reference: https://www.man7.org/linux/man-pages/man7/signal.7.html
Some(Some(9)) => Ok(ExitKind::Oom),
Some(Some(_)) => Ok(ExitKind::Crash),
Some(None) => Ok(ExitKind::Ok),
None => {
// if this fails, there is not much we can do. let's hope it failed because the process finished
// in the meantime.
drop(child.kill());
// finally, try to wait to properly clean up system resources.
drop(child.wait());
Ok(ExitKind::Timeout)
}
}
}
}
#[cfg(feature = "std")]
impl<EM, I, S, Z, T, OT> HasObservers<I, OT, S> for CommandExecutor<EM, I, S, Z, T, OT>
where
I: Input,
OT: ObserversTuple<I, S>,
T: CommandConfigurator<EM, I, S, Z>,
{
#[inline]
fn observers(&self) -> &OT {
&self.observers
}
#[inline]
fn observers_mut(&mut self) -> &mut OT {
&mut self.observers
}
}
/// A `CommandConfigurator` takes care of creating and spawning a [`std::process::Command`] for the [`CommandExecutor`].
/// # Example
/// ```
/// # use std::{io::Write, process::{Stdio, Command, Child}};
/// # use libafl::{Error, inputs::{Input, HasTargetBytes}, executors::{Executor, command::CommandConfigurator}};
/// struct MyExecutor;
///
/// impl<EM, I: Input + HasTargetBytes, S, Z> CommandConfigurator<EM, I, S, Z> for MyExecutor {
/// fn spawn_child(
/// &mut self,
/// fuzzer: &mut Z,
/// state: &mut S,
/// mgr: &mut EM,
/// input: &I,
/// ) -> Result<Child, Error> {
/// let mut command = Command::new("../if");
/// command
/// .stdin(Stdio::piped())
/// .stdout(Stdio::null())
/// .stderr(Stdio::null());
///
/// let child = command.spawn().expect("failed to start process");
/// let mut stdin = child.stdin.as_ref().unwrap();
/// stdin.write_all(input.target_bytes().as_slice())?;
/// Ok(child)
/// }
/// }
///
/// fn make_executor<EM, I: Input + HasTargetBytes, S, Z>() -> impl Executor<EM, I, S, Z> {
/// MyExecutor.into_executor(())
/// }
/// ```
#[cfg(feature = "std")]
pub trait CommandConfigurator<EM, I: Input, S, Z>: Sized {
fn spawn_child(
&mut self,
fuzzer: &mut Z,
state: &mut S,
mgr: &mut EM,
input: &I,
) -> Result<Child, Error>;
fn into_executor<OT>(self, observers: OT) -> CommandExecutor<EM, I, S, Z, Self, OT>
where
OT: ObserversTuple<I, S>,
{
CommandExecutor {
inner: self,
observers,
phantom: PhantomData,
}
}
}


@ -16,6 +16,12 @@ pub use combined::CombinedExecutor;
pub mod shadow;
pub use shadow::ShadowExecutor;
pub mod with_observers;
pub use with_observers::WithObservers;
pub mod command;
pub use command::CommandExecutor;
use crate::{
inputs::{HasTargetBytes, Input},
observers::ObserversTuple,
@ -64,6 +70,18 @@ where
mgr: &mut EM,
input: &I,
) -> Result<ExitKind, Error>;
/// Wraps this Executor with the given [`ObserversTuple`] to implement [`HasObservers`].
///
/// If the executor already implements [`HasObservers`], then the original implementation will be overshadowed by
/// the implementation of this wrapper.
fn with_observers<OT>(self, observers: OT) -> WithObservers<Self, OT>
where
Self: Sized,
OT: ObserversTuple<I, S>,
{
WithObservers::new(self, observers)
}
}
/// A simple executor that does nothing.


@ -0,0 +1,52 @@
use crate::{inputs::Input, observers::ObserversTuple, Error};
use super::{Executor, ExitKind, HasObservers};
/// A wrapper for any [`Executor`] to make it implement [`HasObservers`] using a given [`ObserversTuple`].
pub struct WithObservers<E, OT> {
executor: E,
observers: OT,
}
impl<EM, I, S, Z, E, OT> Executor<EM, I, S, Z> for WithObservers<E, OT>
where
I: Input,
E: Executor<EM, I, S, Z>,
{
fn run_target(
&mut self,
fuzzer: &mut Z,
state: &mut S,
mgr: &mut EM,
input: &I,
) -> Result<ExitKind, Error> {
self.executor.run_target(fuzzer, state, mgr, input)
}
}
impl<I, S, E, OT> HasObservers<I, OT, S> for WithObservers<E, OT>
where
I: Input,
OT: ObserversTuple<I, S>,
{
fn observers(&self) -> &OT {
&self.observers
}
fn observers_mut(&mut self) -> &mut OT {
&mut self.observers
}
}
impl<E, OT> WithObservers<E, OT> {
/// Wraps the given [`Executor`] with the given [`ObserversTuple`] to implement [`HasObservers`].
///
/// If the executor already implements [`HasObservers`], then the original implementation will be overshadowed by
/// the implementation of this wrapper.
pub fn new(executor: E, observers: OT) -> Self {
Self {
executor,
observers,
}
}
}
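
A hedged sketch of the intended use (it assumes `TimeObserver`'s `Observer` implementation is generic over input and state types): wrapping an observer-less executor so that stages requiring `HasObservers` can run it.

```rust
use libafl::{
    bolts::tuples::tuple_list,
    executors::{Executor, WithObservers},
    inputs::Input,
    observers::TimeObserver,
};

/// Attach a single TimeObserver to an executor that has no observers of its own.
fn with_time_observer<EM, I, S, Z, E>(executor: E) -> WithObservers<E, (TimeObserver, ())>
where
    I: Input,
    E: Executor<EM, I, S, Z>,
{
    executor.with_observers(tuple_list!(TimeObserver::new("time")))
}
```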


@ -0,0 +1,79 @@
use crate::{
bolts::tuples::Named,
corpus::Testcase,
events::EventFirer,
executors::ExitKind,
feedbacks::Feedback,
inputs::Input,
observers::{
concolic::{ConcolicMetadata, ConcolicObserver},
ObserversTuple,
},
state::{HasClientPerfStats, HasMetadata},
Error,
};
/// The concolic feedback. It is used to attach concolic tracing metadata to the testcase.
/// This feedback should be used in combination with another feedback as this feedback always considers testcases
/// to be not interesting.
/// Requires a [`ConcolicObserver`] to observe the concolic trace.
pub struct ConcolicFeedback {
name: String,
metadata: Option<ConcolicMetadata>,
}
impl ConcolicFeedback {
#[allow(unused)]
#[must_use]
pub fn from_observer(observer: &ConcolicObserver) -> Self {
Self {
name: observer.name().to_owned(),
metadata: None,
}
}
}
impl Named for ConcolicFeedback {
fn name(&self) -> &str {
&self.name
}
}
impl<I, S> Feedback<I, S> for ConcolicFeedback
where
I: Input,
S: HasClientPerfStats,
{
fn is_interesting<EM, OT>(
&mut self,
_state: &mut S,
_manager: &mut EM,
_input: &I,
observers: &OT,
_exit_kind: &ExitKind,
) -> Result<bool, Error>
where
EM: EventFirer<I, S>,
OT: ObserversTuple<I, S>,
{
self.metadata = observers
.match_name::<ConcolicObserver>(&self.name)
.map(ConcolicObserver::create_metadata_from_current_map);
Ok(false)
}
fn append_metadata(
&mut self,
_state: &mut S,
_testcase: &mut Testcase<I>,
) -> Result<(), Error> {
if let Some(metadata) = self.metadata.take() {
_testcase.metadata_mut().insert(metadata);
}
Ok(())
}
fn discard_metadata(&mut self, _state: &mut S, _input: &I) -> Result<(), Error> {
Ok(())
}
}
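
A small sketch (not from this diff) of constructing the feedback; since it never rates an input as interesting by itself, a real setup combines it with a coverage feedback via `feedback_or!`.

```rust
use libafl::{feedbacks::ConcolicFeedback, observers::concolic::ConcolicObserver};

/// Build a ConcolicFeedback from the observer that watches the runtime's trace buffer.
/// The returned feedback only attaches the recorded trace to new testcases.
fn concolic_feedback_sketch(concolic_map: &[u8]) -> ConcolicFeedback {
    let concolic_observer = ConcolicObserver::new("concolic".to_string(), concolic_map);
    ConcolicFeedback::from_observer(&concolic_observer)
}
```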


@ -4,6 +4,11 @@
pub mod map;
pub use map::*;
#[cfg(feature = "std")]
pub mod concolic;
#[cfg(feature = "std")]
pub use concolic::ConcolicFeedback;
use alloc::string::{String, ToString};
use serde::{Deserialize, Serialize};


@ -0,0 +1,23 @@
use crate::observers::concolic::{serialization_format::MessageFileReader, SymExpr, SymExprRef};
use serde::{Deserialize, Serialize};
/// A metadata holding a buffer of a concolic trace.
#[derive(Default, Serialize, Deserialize, Debug)]
pub struct ConcolicMetadata {
/// Constraints data
buffer: Vec<u8>,
}
impl ConcolicMetadata {
/// Iterates over all messages in the buffer. Does not consume the buffer.
pub fn iter_messages(&self) -> impl Iterator<Item = (SymExprRef, SymExpr)> + '_ {
let mut parser = MessageFileReader::from_buffer(&self.buffer);
std::iter::from_fn(move || parser.next_message()).flatten()
}
pub(crate) fn from_buffer(buffer: Vec<u8>) -> Self {
Self { buffer }
}
}
crate::impl_serdeany!(ConcolicMetadata);
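
A brief sketch (illustrative only) of consuming the attached metadata; in this PR the real consumer is the concolic mutational stage, but the same iteration works for ad-hoc inspection:

```rust
use libafl::{corpus::Testcase, inputs::BytesInput, observers::concolic::ConcolicMetadata};

/// Print the constraints that a ConcolicTracingStage attached to a testcase.
fn dump_constraints(testcase: &Testcase<BytesInput>) {
    if let Some(metadata) = testcase.metadata().get::<ConcolicMetadata>() {
        for (id, expr) in metadata.iter_messages() {
            println!("{:?} -> {:?}", id, expr);
        }
    }
}
```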


@ -0,0 +1,334 @@
use core::num::NonZeroUsize;
#[cfg(feature = "std")]
use serde::{Deserialize, Serialize};
pub type SymExprRef = NonZeroUsize;
#[cfg(feature = "std")]
#[derive(Serialize, Deserialize, Debug)]
pub enum SymExpr {
GetInputByte {
offset: usize,
},
BuildInteger {
value: u64,
bits: u8,
},
BuildInteger128 {
high: u64,
low: u64,
},
BuildFloat {
value: f64,
is_double: bool,
},
BuildNullPointer,
BuildTrue,
BuildFalse,
BuildBool {
value: bool,
},
BuildNeg {
op: SymExprRef,
},
BuildAdd {
a: SymExprRef,
b: SymExprRef,
},
BuildSub {
a: SymExprRef,
b: SymExprRef,
},
BuildMul {
a: SymExprRef,
b: SymExprRef,
},
BuildUnsignedDiv {
a: SymExprRef,
b: SymExprRef,
},
BuildSignedDiv {
a: SymExprRef,
b: SymExprRef,
},
BuildUnsignedRem {
a: SymExprRef,
b: SymExprRef,
},
BuildSignedRem {
a: SymExprRef,
b: SymExprRef,
},
BuildShiftLeft {
a: SymExprRef,
b: SymExprRef,
},
BuildLogicalShiftRight {
a: SymExprRef,
b: SymExprRef,
},
BuildArithmeticShiftRight {
a: SymExprRef,
b: SymExprRef,
},
BuildSignedLessThan {
a: SymExprRef,
b: SymExprRef,
},
BuildSignedLessEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildSignedGreaterThan {
a: SymExprRef,
b: SymExprRef,
},
BuildSignedGreaterEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildUnsignedLessThan {
a: SymExprRef,
b: SymExprRef,
},
BuildUnsignedLessEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildUnsignedGreaterThan {
a: SymExprRef,
b: SymExprRef,
},
BuildUnsignedGreaterEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildNot {
op: SymExprRef,
},
BuildEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildNotEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildBoolAnd {
a: SymExprRef,
b: SymExprRef,
},
BuildBoolOr {
a: SymExprRef,
b: SymExprRef,
},
BuildBoolXor {
a: SymExprRef,
b: SymExprRef,
},
BuildAnd {
a: SymExprRef,
b: SymExprRef,
},
BuildOr {
a: SymExprRef,
b: SymExprRef,
},
BuildXor {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatOrdered {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatOrderedGreaterThan {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatOrderedGreaterEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatOrderedLessThan {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatOrderedLessEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatOrderedEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatOrderedNotEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatUnordered {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatUnorderedGreaterThan {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatUnorderedGreaterEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatUnorderedLessThan {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatUnorderedLessEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatUnorderedEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatUnorderedNotEqual {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatAbs {
op: SymExprRef,
},
BuildFloatAdd {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatSub {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatMul {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatDiv {
a: SymExprRef,
b: SymExprRef,
},
BuildFloatRem {
a: SymExprRef,
b: SymExprRef,
},
BuildSext {
op: SymExprRef,
bits: u8,
},
BuildZext {
op: SymExprRef,
bits: u8,
},
BuildTrunc {
op: SymExprRef,
bits: u8,
},
BuildIntToFloat {
op: SymExprRef,
is_double: bool,
is_signed: bool,
},
BuildFloatToFloat {
op: SymExprRef,
to_double: bool,
},
BuildBitsToFloat {
op: SymExprRef,
to_double: bool,
},
BuildFloatToBits {
op: SymExprRef,
},
BuildFloatToSignedInteger {
op: SymExprRef,
bits: u8,
},
BuildFloatToUnsignedInteger {
op: SymExprRef,
bits: u8,
},
BuildBoolToBits {
op: SymExprRef,
bits: u8,
},
ConcatHelper {
a: SymExprRef,
b: SymExprRef,
},
ExtractHelper {
op: SymExprRef,
first_bit: usize,
last_bit: usize,
},
BuildExtract {
op: SymExprRef,
offset: u64,
length: u64,
little_endian: bool,
},
BuildBswap {
op: SymExprRef,
},
BuildInsert {
target: SymExprRef,
to_insert: SymExprRef,
offset: u64,
little_endian: bool,
},
PushPathConstraint {
constraint: SymExprRef,
taken: bool,
site_id: usize,
},
/// These expressions won't be referenced again
ExpressionsUnreachable {
exprs: Vec<SymExprRef>,
},
/// This marks the end of the trace.
End,
}
#[cfg(feature = "std")]
pub mod serialization_format;
/// The environment name used to identify the hitmap for the concolic runtime.
pub const HITMAP_ENV_NAME: &str = "LIBAFL_CONCOLIC_HITMAP";
/// The name of the environment variable that contains the byte offsets to be symbolized.
pub const SELECTIVE_SYMBOLICATION_ENV_NAME: &str = "LIBAFL_SELECTIVE_SYMBOLICATION";
/// The name of the environment variable that tells the runtime to concretize all floating point operations.
pub const NO_FLOAT_ENV_NAME: &str = "LIBAFL_CONCOLIC_NO_FLOAT";
/// The name of the environment variable that enables expression pruning in the runtime.
pub const EXPRESSION_PRUNING: &str = "LIBAFL_CONCOLIC_EXPRESSION_PRUNING";
#[cfg(feature = "std")]
mod metadata;
#[cfg(feature = "std")]
pub use metadata::ConcolicMetadata;
#[cfg(feature = "std")]
mod observer;
#[cfg(feature = "std")]
pub use observer::ConcolicObserver;
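
As a sketch of how these messages are typically consumed (not part of the diff): matching on `SymExpr` to pick out the path constraints from an already-decoded stream, e.g. one produced by `ConcolicMetadata::iter_messages`.

```rust
use libafl::observers::concolic::{SymExpr, SymExprRef};

/// Collect the expression ids of all path constraints in a decoded message stream.
fn constraint_ids(messages: impl Iterator<Item = (SymExprRef, SymExpr)>) -> Vec<SymExprRef> {
    messages
        .filter_map(|(_id, expr)| match expr {
            SymExpr::PushPathConstraint { constraint, .. } => Some(constraint),
            _ => None,
        })
        .collect()
}
```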


@ -0,0 +1,41 @@
use crate::{
bolts::tuples::Named,
observers::{
concolic::{serialization_format::MessageFileReader, ConcolicMetadata},
Observer,
},
};
use serde::{Deserialize, Serialize};
/// A standard [`ConcolicObserver`] observer, observing constraints written into a memory buffer.
#[derive(Serialize, Deserialize, Debug)]
pub struct ConcolicObserver<'map> {
#[serde(skip)]
map: &'map [u8],
name: String,
}
impl<'map, I, S> Observer<I, S> for ConcolicObserver<'map> {}
impl<'map> ConcolicObserver<'map> {
#[must_use]
pub fn create_metadata_from_current_map(&self) -> ConcolicMetadata {
let reader = MessageFileReader::from_length_prefixed_buffer(self.map)
.expect("constructing the message reader from a memory buffer should not fail");
ConcolicMetadata::from_buffer(reader.get_buffer().to_vec())
}
}
impl<'map> Named for ConcolicObserver<'map> {
fn name(&self) -> &str {
&self.name
}
}
impl<'map> ConcolicObserver<'map> {
/// Creates a new [`ConcolicObserver`] with the given name and memory buffer.
#[must_use]
pub fn new(name: String, map: &'map [u8]) -> Self {
Self { map, name }
}
}


@ -0,0 +1,368 @@
#![cfg(feature = "std")]
use std::io::{self, Read, Seek, SeekFrom, Write};
use bincode::{DefaultOptions, Options};
use super::{SymExpr, SymExprRef};
pub use bincode::ErrorKind;
pub use bincode::Result;
fn serialization_options() -> DefaultOptions {
DefaultOptions::new()
}
pub struct MessageFileReader<R: Read> {
reader: R,
deserializer_config: bincode::DefaultOptions,
current_id: usize,
}
impl<R: Read> MessageFileReader<R> {
pub fn from_reader(reader: R) -> Self {
Self {
reader,
deserializer_config: serialization_options(),
current_id: 1,
}
}
pub fn next_message(&mut self) -> Option<bincode::Result<(SymExprRef, SymExpr)>> {
match self.deserializer_config.deserialize_from(&mut self.reader) {
Ok(mut message) => {
if let SymExpr::End = message {
None
} else {
let message_id = self.transform_message(&mut message);
Some(Ok((message_id, message)))
}
}
Err(e) => match *e {
bincode::ErrorKind::Io(ref io_err) => match io_err.kind() {
io::ErrorKind::UnexpectedEof => None,
_ => Some(Err(e)),
},
_ => Some(Err(e)),
},
}
}
fn make_absolute(&self, expr: SymExprRef) -> SymExprRef {
SymExprRef::new(self.current_id - expr.get()).unwrap()
}
fn transform_message(&mut self, message: &mut SymExpr) -> SymExprRef {
let ret = self.current_id;
match message {
SymExpr::GetInputByte { .. }
| SymExpr::BuildInteger { .. }
| SymExpr::BuildInteger128 { .. }
| SymExpr::BuildFloat { .. }
| SymExpr::BuildNullPointer
| SymExpr::BuildTrue
| SymExpr::BuildFalse
| SymExpr::BuildBool { .. } => {
self.current_id += 1;
}
SymExpr::BuildNeg { op }
| SymExpr::BuildFloatAbs { op }
| SymExpr::BuildNot { op }
| SymExpr::BuildSext { op, .. }
| SymExpr::BuildZext { op, .. }
| SymExpr::BuildTrunc { op, .. }
| SymExpr::BuildIntToFloat { op, .. }
| SymExpr::BuildFloatToFloat { op, .. }
| SymExpr::BuildBitsToFloat { op, .. }
| SymExpr::BuildFloatToBits { op }
| SymExpr::BuildFloatToSignedInteger { op, .. }
| SymExpr::BuildFloatToUnsignedInteger { op, .. }
| SymExpr::BuildBoolToBits { op, .. }
| SymExpr::ExtractHelper { op, .. }
| SymExpr::BuildExtract { op, .. }
| SymExpr::BuildBswap { op } => {
*op = self.make_absolute(*op);
self.current_id += 1;
}
SymExpr::BuildAdd { a, b }
| SymExpr::BuildSub { a, b }
| SymExpr::BuildMul { a, b }
| SymExpr::BuildUnsignedDiv { a, b }
| SymExpr::BuildSignedDiv { a, b }
| SymExpr::BuildUnsignedRem { a, b }
| SymExpr::BuildSignedRem { a, b }
| SymExpr::BuildShiftLeft { a, b }
| SymExpr::BuildLogicalShiftRight { a, b }
| SymExpr::BuildArithmeticShiftRight { a, b }
| SymExpr::BuildSignedLessThan { a, b }
| SymExpr::BuildSignedLessEqual { a, b }
| SymExpr::BuildSignedGreaterThan { a, b }
| SymExpr::BuildSignedGreaterEqual { a, b }
| SymExpr::BuildUnsignedLessThan { a, b }
| SymExpr::BuildUnsignedLessEqual { a, b }
| SymExpr::BuildUnsignedGreaterThan { a, b }
| SymExpr::BuildUnsignedGreaterEqual { a, b }
| SymExpr::BuildEqual { a, b }
| SymExpr::BuildNotEqual { a, b }
| SymExpr::BuildBoolAnd { a, b }
| SymExpr::BuildBoolOr { a, b }
| SymExpr::BuildBoolXor { a, b }
| SymExpr::BuildAnd { a, b }
| SymExpr::BuildOr { a, b }
| SymExpr::BuildXor { a, b }
| SymExpr::BuildFloatOrdered { a, b }
| SymExpr::BuildFloatOrderedGreaterThan { a, b }
| SymExpr::BuildFloatOrderedGreaterEqual { a, b }
| SymExpr::BuildFloatOrderedLessThan { a, b }
| SymExpr::BuildFloatOrderedLessEqual { a, b }
| SymExpr::BuildFloatOrderedEqual { a, b }
| SymExpr::BuildFloatOrderedNotEqual { a, b }
| SymExpr::BuildFloatUnordered { a, b }
| SymExpr::BuildFloatUnorderedGreaterThan { a, b }
| SymExpr::BuildFloatUnorderedGreaterEqual { a, b }
| SymExpr::BuildFloatUnorderedLessThan { a, b }
| SymExpr::BuildFloatUnorderedLessEqual { a, b }
| SymExpr::BuildFloatUnorderedEqual { a, b }
| SymExpr::BuildFloatUnorderedNotEqual { a, b }
| SymExpr::BuildFloatAdd { a, b }
| SymExpr::BuildFloatSub { a, b }
| SymExpr::BuildFloatMul { a, b }
| SymExpr::BuildFloatDiv { a, b }
| SymExpr::BuildFloatRem { a, b }
| SymExpr::ConcatHelper { a, b }
| SymExpr::BuildInsert {
target: a,
to_insert: b,
..
} => {
*a = self.make_absolute(*a);
*b = self.make_absolute(*b);
self.current_id += 1;
}
SymExpr::PushPathConstraint { constraint: op, .. } => {
*op = self.make_absolute(*op);
}
SymExpr::ExpressionsUnreachable { exprs } => {
for expr in exprs {
*expr = self.make_absolute(*expr);
}
}
SymExpr::End => {
panic!("should not pass End message to this function");
}
}
SymExprRef::new(ret).unwrap()
}
}
pub struct MessageFileWriter<W: Write> {
id_counter: usize,
writer: W,
writer_start_position: u64,
serialization_options: DefaultOptions,
}
impl<W: Write + Seek> MessageFileWriter<W> {
pub fn from_writer(mut writer: W) -> io::Result<Self> {
let writer_start_position = writer.stream_position()?;
// write dummy trace length
writer.write_all(&0u64.to_le_bytes())?;
Ok(Self {
id_counter: 1,
writer,
writer_start_position,
serialization_options: serialization_options(),
})
}
fn write_trace_size(&mut self) -> io::Result<()> {
// calculate size of trace
let end_pos = self.writer.stream_position()?;
let trace_header_len = 0u64.to_le_bytes().len() as u64;
assert!(end_pos > self.writer_start_position + trace_header_len);
let trace_length = end_pos - self.writer_start_position - trace_header_len;
// write trace size to beginning of trace
self.writer
.seek(SeekFrom::Start(self.writer_start_position))?;
self.writer.write_all(&trace_length.to_le_bytes())?;
// rewind to previous position
self.writer.seek(SeekFrom::Start(end_pos))?;
Ok(())
}
pub fn end(&mut self) -> io::Result<()> {
self.write_trace_size()?;
Ok(())
}
fn make_relative(&self, expr: SymExprRef) -> SymExprRef {
SymExprRef::new(self.id_counter - expr.get()).unwrap()
}
#[allow(clippy::too_many_lines)]
pub fn write_message(&mut self, mut message: SymExpr) -> bincode::Result<SymExprRef> {
let current_id = self.id_counter;
match &mut message {
SymExpr::GetInputByte { .. }
| SymExpr::BuildInteger { .. }
| SymExpr::BuildInteger128 { .. }
| SymExpr::BuildFloat { .. }
| SymExpr::BuildNullPointer
| SymExpr::BuildTrue
| SymExpr::BuildFalse
| SymExpr::BuildBool { .. } => {
self.id_counter += 1;
}
SymExpr::BuildNeg { op }
| SymExpr::BuildFloatAbs { op }
| SymExpr::BuildNot { op }
| SymExpr::BuildSext { op, .. }
| SymExpr::BuildZext { op, .. }
| SymExpr::BuildTrunc { op, .. }
| SymExpr::BuildIntToFloat { op, .. }
| SymExpr::BuildFloatToFloat { op, .. }
| SymExpr::BuildBitsToFloat { op, .. }
| SymExpr::BuildFloatToBits { op }
| SymExpr::BuildFloatToSignedInteger { op, .. }
| SymExpr::BuildFloatToUnsignedInteger { op, .. }
| SymExpr::BuildBoolToBits { op, .. }
| SymExpr::ExtractHelper { op, .. }
| SymExpr::BuildExtract { op, .. }
| SymExpr::BuildBswap { op } => {
*op = self.make_relative(*op);
self.id_counter += 1;
}
SymExpr::BuildAdd { a, b }
| SymExpr::BuildSub { a, b }
| SymExpr::BuildMul { a, b }
| SymExpr::BuildUnsignedDiv { a, b }
| SymExpr::BuildSignedDiv { a, b }
| SymExpr::BuildUnsignedRem { a, b }
| SymExpr::BuildSignedRem { a, b }
| SymExpr::BuildShiftLeft { a, b }
| SymExpr::BuildLogicalShiftRight { a, b }
| SymExpr::BuildArithmeticShiftRight { a, b }
| SymExpr::BuildSignedLessThan { a, b }
| SymExpr::BuildSignedLessEqual { a, b }
| SymExpr::BuildSignedGreaterThan { a, b }
| SymExpr::BuildSignedGreaterEqual { a, b }
| SymExpr::BuildUnsignedLessThan { a, b }
| SymExpr::BuildUnsignedLessEqual { a, b }
| SymExpr::BuildUnsignedGreaterThan { a, b }
| SymExpr::BuildUnsignedGreaterEqual { a, b }
| SymExpr::BuildEqual { a, b }
| SymExpr::BuildNotEqual { a, b }
| SymExpr::BuildBoolAnd { a, b }
| SymExpr::BuildBoolOr { a, b }
| SymExpr::BuildBoolXor { a, b }
| SymExpr::BuildAnd { a, b }
| SymExpr::BuildOr { a, b }
| SymExpr::BuildXor { a, b }
| SymExpr::BuildFloatOrdered { a, b }
| SymExpr::BuildFloatOrderedGreaterThan { a, b }
| SymExpr::BuildFloatOrderedGreaterEqual { a, b }
| SymExpr::BuildFloatOrderedLessThan { a, b }
| SymExpr::BuildFloatOrderedLessEqual { a, b }
| SymExpr::BuildFloatOrderedEqual { a, b }
| SymExpr::BuildFloatOrderedNotEqual { a, b }
| SymExpr::BuildFloatUnordered { a, b }
| SymExpr::BuildFloatUnorderedGreaterThan { a, b }
| SymExpr::BuildFloatUnorderedGreaterEqual { a, b }
| SymExpr::BuildFloatUnorderedLessThan { a, b }
| SymExpr::BuildFloatUnorderedLessEqual { a, b }
| SymExpr::BuildFloatUnorderedEqual { a, b }
| SymExpr::BuildFloatUnorderedNotEqual { a, b }
| SymExpr::BuildFloatAdd { a, b }
| SymExpr::BuildFloatSub { a, b }
| SymExpr::BuildFloatMul { a, b }
| SymExpr::BuildFloatDiv { a, b }
| SymExpr::BuildFloatRem { a, b }
| SymExpr::ConcatHelper { a, b }
| SymExpr::BuildInsert {
target: a,
to_insert: b,
..
} => {
*a = self.make_relative(*a);
*b = self.make_relative(*b);
self.id_counter += 1;
}
SymExpr::PushPathConstraint { constraint: op, .. } => {
*op = self.make_relative(*op);
}
SymExpr::ExpressionsUnreachable { exprs } => {
for expr in exprs {
*expr = self.make_relative(*expr);
}
}
SymExpr::End => {}
}
self.serialization_options
.serialize_into(&mut self.writer, &message)?;
// after every path constraint, update the trace header so the constraints recorded so far can still be decoded if the target crashes
if let SymExpr::PushPathConstraint { .. } = &message {
self.write_trace_size()?;
}
Ok(SymExprRef::new(current_id).unwrap())
}
}
pub mod shared_memory {
use std::{
convert::TryFrom,
io::{self, Cursor, Read},
};
use crate::bolts::shmem::{ShMem, ShMemCursor, ShMemProvider, StdShMemProvider};
use super::{MessageFileReader, MessageFileWriter};
pub const DEFAULT_ENV_NAME: &str = "SHARED_MEMORY_MESSAGES";
pub const DEFAULT_SIZE: usize = 1024 * 1024 * 1024;
impl<'buffer> MessageFileReader<Cursor<&'buffer [u8]>> {
#[must_use]
pub fn from_buffer(buffer: &'buffer [u8]) -> Self {
Self::from_reader(Cursor::new(buffer))
}
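/// Creates a reader from a buffer that starts with the total trace length as a little-endian
/// `u64`, followed by the serialized messages. This matches the framing that
/// [`MessageFileWriter`] maintains at the start of the shared memory region.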
pub fn from_length_prefixed_buffer(mut buffer: &'buffer [u8]) -> io::Result<Self> {
let mut len_buf = 0u64.to_le_bytes();
buffer.read_exact(&mut len_buf)?;
let buffer_len = u64::from_le_bytes(len_buf);
assert!(usize::try_from(buffer_len).is_ok());
let buffer_len = buffer_len as usize;
let (buffer, _) = buffer.split_at(buffer_len);
Ok(Self::from_buffer(buffer))
}
#[must_use]
pub fn get_buffer(&self) -> &[u8] {
self.reader.get_ref()
}
}
impl<T: ShMem> MessageFileWriter<ShMemCursor<T>> {
pub fn from_shmem(shmem: T) -> io::Result<Self> {
Self::from_writer(ShMemCursor::new(shmem))
}
}
impl MessageFileWriter<ShMemCursor<<StdShMemProvider as ShMemProvider>::Mem>> {
pub fn from_stdshmem_env_with_name(env_name: impl AsRef<str>) -> io::Result<Self> {
Self::from_shmem(
StdShMemProvider::new()
.expect("unable to initialize StdShMemProvider")
.existing_from_env(env_name.as_ref())
.expect("unable to get shared memory from env"),
)
}
pub fn from_stdshmem_default_env() -> io::Result<Self> {
Self::from_stdshmem_env_with_name(DEFAULT_ENV_NAME)
}
}
pub type StdShMemMessageFileWriter =
MessageFileWriter<ShMemCursor<<StdShMemProvider as ShMemProvider>::Mem>>;
}

View File

@ -6,6 +6,8 @@ pub use map::*;
pub mod cmp;
pub use cmp::*;
pub mod concolic;
use alloc::string::{String, ToString};
use core::time::Duration;
use serde::{Deserialize, Serialize};

View File

@ -0,0 +1,430 @@
use core::marker::PhantomData;
use crate::{
corpus::Corpus,
executors::{Executor, HasObservers},
inputs::Input,
observers::{concolic::ConcolicObserver, ObserversTuple},
state::{HasClientPerfStats, HasCorpus, HasExecutions, HasMetadata},
Error,
};
use super::{Stage, TracingStage};
/// Wraps a [`TracingStage`] to add concolic observing.
#[derive(Clone, Debug)]
pub struct ConcolicTracingStage<C, EM, I, OT, S, TE, Z>
where
I: Input,
C: Corpus<I>,
TE: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
OT: ObserversTuple<I, S>,
S: HasClientPerfStats + HasExecutions + HasCorpus<C, I>,
{
inner: TracingStage<C, EM, I, OT, S, TE, Z>,
observer_name: String,
}
impl<E, C, EM, I, OT, S, TE, Z> Stage<E, EM, S, Z> for ConcolicTracingStage<C, EM, I, OT, S, TE, Z>
where
I: Input,
C: Corpus<I>,
TE: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
OT: ObserversTuple<I, S>,
S: HasClientPerfStats + HasExecutions + HasCorpus<C, I>,
{
#[inline]
fn perform(
&mut self,
fuzzer: &mut Z,
executor: &mut E,
state: &mut S,
manager: &mut EM,
corpus_idx: usize,
) -> Result<(), Error> {
self.inner
.perform(fuzzer, executor, state, manager, corpus_idx)?;
if let Some(observer) = self
.inner
.executor()
.observers()
.match_name::<ConcolicObserver>(&self.observer_name)
{
let metadata = observer.create_metadata_from_current_map();
state
.corpus_mut()
.get(corpus_idx)
.unwrap()
.borrow_mut()
.metadata_mut()
.insert(metadata);
}
Ok(())
}
}
impl<C, EM, I, OT, S, TE, Z> ConcolicTracingStage<C, EM, I, OT, S, TE, Z>
where
I: Input,
C: Corpus<I>,
TE: Executor<EM, I, S, Z> + HasObservers<I, OT, S>,
OT: ObserversTuple<I, S>,
S: HasClientPerfStats + HasExecutions + HasCorpus<C, I>,
{
/// Creates a new concolic tracing stage that wraps the given [`TracingStage`], reading traces from a [`ConcolicObserver`] with the given name.
pub fn new(inner: TracingStage<C, EM, I, OT, S, TE, Z>, observer_name: String) -> Self {
Self {
inner,
observer_name,
}
}
}
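// Typical wiring (hypothetical, shown only as a sketch): build an executor for the
// SymCC-instrumented target, register a `ConcolicObserver` under a known name, wrap the executor
// in a `TracingStage`, and pass both to `ConcolicTracingStage::new`. Every traced testcase then
// gets a `ConcolicMetadata` attached, which the mutational stage below consumes.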
#[cfg(feature = "concolic_mutation")]
use crate::{
inputs::HasBytesVec,
mark_feature_time,
observers::concolic::{ConcolicMetadata, SymExpr, SymExprRef},
start_timer, Evaluator,
};
#[cfg(feature = "introspection")]
use crate::stats::PerfFeature;
#[cfg(feature = "concolic_mutation")]
#[allow(clippy::too_many_lines)]
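/// Generates candidate mutations from a concolic trace. Each returned mutation is a list of
/// `(input byte offset, new byte value)` replacements, obtained from a Z3 model that satisfies
/// the negation of one of the path constraints in the trace.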
fn generate_mutations(iter: impl Iterator<Item = (SymExprRef, SymExpr)>) -> Vec<Vec<(usize, u8)>> {
use core::mem::size_of;
use hashbrown::HashMap;
use std::convert::TryInto;
use z3::{
ast::{Ast, Bool, Dynamic, BV},
Config, Context, Solver, Symbol,
};
fn build_extract<'ctx>(
bv: &BV<'ctx>,
offset: u64,
length: u64,
little_endian: bool,
) -> BV<'ctx> {
let size = u64::from(bv.get_size());
assert_eq!(
size % 8,
0,
"cannot perform a byte-wise extract on a BV whose size is not a multiple of 8 bits"
);
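// For little-endian extracts, pull the bytes out one at a time (byte offset 0 refers to the
// most significant byte of the BV) and concatenate them in reverse order; big-endian extracts
// map to a single contiguous `extract`.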
if little_endian {
(0..length)
.map(|i| {
bv.extract(
(size - (offset + i) * 8 - 1).try_into().unwrap(),
(size - (offset + i + 1) * 8).try_into().unwrap(),
)
})
.reduce(|acc, next| next.concat(&acc))
.unwrap()
} else {
bv.extract(
(size - offset * 8 - 1).try_into().unwrap(),
(size - (offset + length) * 8).try_into().unwrap(),
)
}
}
let mut res = Vec::new();
let ctx = Context::new(&Config::new());
let solver = Solver::new(&ctx);
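// Maps the expression ids from the trace to the Z3 ASTs built for them so far; operands of
// later expressions and path constraints are looked up here.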
let mut translation = HashMap::<SymExprRef, Dynamic>::new();
macro_rules! bool {
($op:ident) => {
translation[&$op].as_bool().unwrap()
};
}
macro_rules! bv {
($op:ident) => {
translation[&$op].as_bv().unwrap()
};
}
macro_rules! bv_binop {
($a:ident $op:tt $b:ident) => {
Some(bv!($a).$op(&bv!($b)).into())
};
}
for (id, msg) in iter {
let z3_expr: Option<Dynamic> = match msg {
SymExpr::GetInputByte { offset } => {
Some(BV::new_const(&ctx, Symbol::Int(offset as u32), 8).into())
}
SymExpr::BuildInteger { value, bits } => {
Some(BV::from_u64(&ctx, value, u32::from(bits)).into())
}
SymExpr::BuildInteger128 { high: _, low: _ } => todo!(),
SymExpr::BuildNullPointer => {
Some(BV::from_u64(&ctx, 0, (8 * size_of::<usize>()) as u32).into())
}
SymExpr::BuildTrue => Some(Bool::from_bool(&ctx, true).into()),
SymExpr::BuildFalse => Some(Bool::from_bool(&ctx, false).into()),
SymExpr::BuildBool { value } => Some(Bool::from_bool(&ctx, value).into()),
SymExpr::BuildNeg { op } => Some(bv!(op).bvneg().into()),
SymExpr::BuildAdd { a, b } => bv_binop!(a bvadd b),
SymExpr::BuildSub { a, b } => bv_binop!(a bvsub b),
SymExpr::BuildMul { a, b } => bv_binop!(a bvmul b),
SymExpr::BuildUnsignedDiv { a, b } => bv_binop!(a bvudiv b),
SymExpr::BuildSignedDiv { a, b } => bv_binop!(a bvsdiv b),
SymExpr::BuildUnsignedRem { a, b } => bv_binop!(a bvurem b),
SymExpr::BuildSignedRem { a, b } => bv_binop!(a bvsrem b),
SymExpr::BuildShiftLeft { a, b } => bv_binop!(a bvshl b),
SymExpr::BuildLogicalShiftRight { a, b } => bv_binop!(a bvlshr b),
SymExpr::BuildArithmeticShiftRight { a, b } => bv_binop!(a bvashr b),
SymExpr::BuildSignedLessThan { a, b } => bv_binop!(a bvslt b),
SymExpr::BuildSignedLessEqual { a, b } => bv_binop!(a bvsle b),
SymExpr::BuildSignedGreaterThan { a, b } => bv_binop!(a bvsgt b),
SymExpr::BuildSignedGreaterEqual { a, b } => bv_binop!(a bvsge b),
SymExpr::BuildUnsignedLessThan { a, b } => bv_binop!(a bvult b),
SymExpr::BuildUnsignedLessEqual { a, b } => bv_binop!(a bvule b),
SymExpr::BuildUnsignedGreaterThan { a, b } => bv_binop!(a bvugt b),
SymExpr::BuildUnsignedGreaterEqual { a, b } => bv_binop!(a bvuge b),
SymExpr::BuildNot { op } => {
let translated = &translation[&op];
Some(if let Some(bv) = translated.as_bv() {
bv.bvnot().into()
} else if let Some(bool) = translated.as_bool() {
bool.not().into()
} else {
panic!(
"unexpected z3 expr of type {:?} when applying not operation",
translated.kind()
)
})
}
SymExpr::BuildEqual { a, b } => Some(translation[&a]._eq(&translation[&b]).into()),
SymExpr::BuildNotEqual { a, b } => {
Some(translation[&a]._eq(&translation[&b]).not().into())
}
SymExpr::BuildBoolAnd { a, b } => Some(Bool::and(&ctx, &[&bool!(a), &bool!(b)]).into()),
SymExpr::BuildBoolOr { a, b } => Some(Bool::or(&ctx, &[&bool!(a), &bool!(b)]).into()),
SymExpr::BuildBoolXor { a, b } => Some(bool!(a).xor(&bool!(b)).into()),
SymExpr::BuildAnd { a, b } => bv_binop!(a bvand b),
SymExpr::BuildOr { a, b } => bv_binop!(a bvor b),
SymExpr::BuildXor { a, b } => bv_binop!(a bvxor b),
SymExpr::BuildSext { op, bits } => Some(bv!(op).sign_ext(u32::from(bits)).into()),
SymExpr::BuildZext { op, bits } => Some(bv!(op).zero_ext(u32::from(bits)).into()),
SymExpr::BuildTrunc { op, bits } => {
Some(bv!(op).extract(u32::from(bits - 1), 0).into())
}
SymExpr::BuildBoolToBits { op, bits } => Some(
bool!(op)
.ite(
&BV::from_u64(&ctx, 1, u32::from(bits)),
&BV::from_u64(&ctx, 0, u32::from(bits)),
)
.into(),
),
SymExpr::ConcatHelper { a, b } => bv_binop!(a concat b),
SymExpr::ExtractHelper {
op,
first_bit,
last_bit,
} => Some(bv!(op).extract(first_bit as u32, last_bit as u32).into()),
SymExpr::BuildExtract {
op,
offset,
length,
little_endian,
} => Some(build_extract(&(bv!(op)), offset, length, little_endian).into()),
SymExpr::BuildBswap { op } => {
let bv = bv!(op);
let bits = bv.get_size();
assert_eq!(
bits % 16,
0,
"bswap is only compatible with an even number of bytes in the BV"
);
Some(build_extract(&bv, 0, u64::from(bits) / 8, true).into())
}
SymExpr::BuildInsert {
target,
to_insert,
offset,
little_endian,
} => {
let target = bv!(target);
let to_insert = bv!(to_insert);
let bits_to_insert = u64::from(to_insert.get_size());
assert_eq!(bits_to_insert % 8, 0, "can only insert full bytes");
let after_len = (u64::from(target.get_size()) / 8) - offset - (bits_to_insert / 8);
Some(
std::array::IntoIter::new([
if offset == 0 {
None
} else {
Some(build_extract(&target, 0, offset, false))
},
Some(if little_endian {
build_extract(&to_insert, 0, bits_to_insert / 8, true)
} else {
to_insert
}),
if after_len == 0 {
None
} else {
Some(build_extract(
&target,
offset + (bits_to_insert / 8),
after_len,
false,
))
},
])
.reduce(|acc: Option<BV>, val: Option<BV>| match (acc, val) {
(Some(prev), Some(next)) => Some(prev.concat(&next)),
(Some(prev), None) => Some(prev),
(None, next) => next,
})
.unwrap()
.unwrap()
.into(),
)
}
_ => None,
};
if let Some(expr) = z3_expr {
translation.insert(id, expr);
} else if let SymExpr::PushPathConstraint {
constraint,
site_id: _,
taken,
} = msg
{
let op = translation[&constraint].as_bool().unwrap();
let op = if taken { op } else { op.not() }.simplify();
if op.as_bool().is_some() {
// this constraint simplifies to a constant, so negating it cannot yield a new input
} else {
let negated_constraint = op.not().simplify();
solver.push();
solver.assert(&negated_constraint);
match solver.check() {
z3::SatResult::Unsat => {
// negation is unsat => no mutation
solver.pop(1);
// check that our path is even still satisfiable; otherwise, we can stop trying
if matches!(
solver.check(),
z3::SatResult::Unknown | z3::SatResult::Unsat
) {
return res;
}
}
z3::SatResult::Unknown => {
// the solver could not decide (unknown result); skip this constraint
solver.pop(1);
}
z3::SatResult::Sat => {
let model = solver.get_model().unwrap();
let model_string = model.to_string();
let mut replacements = Vec::new();
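// The model is parsed from its textual form; each line is expected to look like
// `k!<offset> -> #x<hex byte>`, where the symbol name encodes the input byte offset
// (see `GetInputByte` above).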
for l in model_string.lines() {
if let [offset_str, value_str] =
l.split(" -> ").collect::<Vec<_>>().as_slice()
{
let offset = offset_str
.trim_start_matches("k!")
.parse::<usize>()
.unwrap();
let value =
u8::from_str_radix(value_str.trim_start_matches("#x"), 16)
.unwrap();
replacements.push((offset, value));
} else {
panic!();
}
}
res.push(replacements);
solver.pop(1);
}
};
// assert the original path constraint so that later constraints are solved relative to the path actually taken
solver.assert(&op);
}
}
}
res
}
/// A mutational stage that uses Z3 to solve concolic constraints attached to the [`Testcase`] by the [`ConcolicTracingStage`].
#[derive(Clone, Debug)]
pub struct SimpleConcolicMutationalStage<C, EM, I, S, Z>
where
I: Input,
C: Corpus<I>,
S: HasClientPerfStats + HasExecutions + HasCorpus<C, I>,
{
_phantom: PhantomData<(C, EM, I, S, Z)>,
}
#[cfg(feature = "concolic_mutation")]
impl<E, C, EM, I, S, Z> Stage<E, EM, S, Z> for SimpleConcolicMutationalStage<C, EM, I, S, Z>
where
I: Input + HasBytesVec,
C: Corpus<I>,
S: HasClientPerfStats + HasExecutions + HasCorpus<C, I>,
Z: Evaluator<E, EM, I, S>,
{
#[inline]
fn perform(
&mut self,
fuzzer: &mut Z,
executor: &mut E,
state: &mut S,
manager: &mut EM,
corpus_idx: usize,
) -> Result<(), Error> {
start_timer!(state);
let testcase = state.corpus().get(corpus_idx)?.clone();
mark_feature_time!(state, PerfFeature::GetInputFromCorpus);
let mutations = if let Some(meta) = testcase.borrow().metadata().get::<ConcolicMetadata>() {
start_timer!(state);
let mutations = generate_mutations(meta.iter_messages());
mark_feature_time!(state, PerfFeature::Mutate);
Some(mutations)
} else {
None
};
if let Some(mutations) = mutations {
let input = { testcase.borrow().input().as_ref().unwrap().clone() };
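// Each mutation is a set of byte replacements; apply it to a copy of the original input
// and let the fuzzer evaluate the result.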
for mutation in mutations {
let mut input_copy = input.to_owned();
for (index, new_byte) in mutation {
input_copy.bytes_mut()[index] = new_byte;
}
// Time is measured directly in the `evaluate_input` function
let _ = fuzzer.evaluate_input(state, executor, manager, input_copy)?;
}
}
Ok(())
}
}
impl<C, EM, I, S, Z> Default for SimpleConcolicMutationalStage<C, EM, I, S, Z>
where
I: Input,
C: Corpus<I>,
S: HasClientPerfStats + HasExecutions + HasCorpus<C, I>,
{
fn default() -> Self {
Self {
_phantom: PhantomData,
}
}
}

View File

@ -18,6 +18,13 @@ pub mod power;
use crate::Error;
pub use power::PowerMutationalStage;
#[cfg(feature = "std")]
pub mod concolic;
#[cfg(feature = "std")]
pub use concolic::ConcolicTracingStage;
#[cfg(feature = "std")]
pub use concolic::SimpleConcolicMutationalStage;
/// A stage is one step in the fuzzing process.
/// Multiple stages will be scheduled one by one for each input.
pub trait Stage<E, EM, S, Z> {

View File

@ -96,6 +96,10 @@ where
phantom: PhantomData,
}
}
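/// Gets a reference to the inner tracer executor.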
pub fn executor(&self) -> &TE {
&self.tracer_executor
}
}
/// A stage that runs the shadow executor, also using the shadow observers

View File

@ -0,0 +1,23 @@
[package]
name = "symcc_runtime"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[features]
# skips building and linking the C++ part of the runtime
no-cpp-runtime = []
[dependencies]
unchecked_unwrap = "3"
ctor = "0.1"
libc = "0.2"
libafl = {path = "../../libafl"}
[build-dependencies]
cmake = "0.1"
bindgen = "0.58"
regex = "1"
lazy_static = "1.4"
which = "4.1"

View File

@ -0,0 +1,281 @@
use std::{
env,
fs::File,
io::{stdout, Write},
path::{Path, PathBuf},
process::{exit, Command},
};
use lazy_static::lazy_static;
use regex::{Regex, RegexBuilder};
const SYMCC_REPO_URL: &str = "https://github.com/AFLplusplus/symcc.git";
const SYMCC_REPO_COMMIT: &str = "45cde0269ae22aef4cca2e1fb98c3b24f7bb2984";
const SYMCC_RUNTIME_FUNCTION_NAME_PREFIX: &str = "_cpp_";
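// The C++ runtime is compiled with this prefix applied to all of its exported functions (via the
// generated rename.h passed with `-include`), so that the Rust runtime can export the original,
// unprefixed symbol names itself and forward them to the prefixed C++ implementations.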
lazy_static! {
static ref FUNCTION_NAME_REGEX: Regex = Regex::new(r"pub fn (\w+)").unwrap();
static ref EXPORTED_FUNCTION_REGEX: Regex = RegexBuilder::new(r"(pub fn \w+\([^\)]*\)[^;]*);")
.multi_line(true)
.build()
.unwrap();
}
fn main() {
let out_path = PathBuf::from(env::var("OUT_DIR").unwrap());
let symcc_src_path = checkout_symcc(&out_path);
write_rust_runtime_macro_file(&out_path, &symcc_src_path);
if env::var("TARGET").unwrap().contains("linux") {
let cpp_bindings = bindgen::Builder::default()
.clang_arg(format!(
"-I{}",
symcc_src_path.join("runtime").to_str().unwrap()
))
.clang_arg(format!(
"-I{}",
symcc_src_path
.join("runtime")
.join("rust_backend")
.to_str()
.unwrap()
))
.clang_args(["-x", "c++", "-std=c++17"].iter())
.header(
symcc_src_path
.join("runtime")
.join("rust_backend")
.join("Runtime.h")
.to_str()
.unwrap(),
)
.header(
symcc_src_path
.join("runtime")
.join("LibcWrappers.cpp")
.to_str()
.unwrap(),
)
.allowlist_type("SymExpr")
.allowlist_function("(_sym_.*)|(.*_symbolized)")
.opaque_type("_.*")
.size_t_is_usize(true)
.generate()
.expect("Unable to generate bindings");
write_symcc_runtime_bindings_file(&out_path, &cpp_bindings);
write_cpp_function_export_macro(&out_path, &cpp_bindings);
if std::env::var("CARGO_FEATURE_NO_CPP_RUNTIME").is_err() {
let rename_header_path = out_path.join("rename.h");
write_symcc_rename_header(&rename_header_path, &cpp_bindings);
build_and_link_symcc_runtime(&symcc_src_path, &rename_header_path);
}
} else {
println!("cargo:warning=Building SymCC is only supported on Linux");
}
}
fn write_cpp_function_export_macro(out_path: &Path, cpp_bindings: &bindgen::Bindings) {
let mut macro_file = File::create(out_path.join("cpp_exports_macro.rs")).unwrap();
writeln!(
&mut macro_file,
"#[doc(hidden)]
#[macro_export]
macro_rules! export_cpp_runtime_functions {{
() => {{",
)
.unwrap();
EXPORTED_FUNCTION_REGEX
.captures_iter(&cpp_bindings.to_string())
.for_each(|captures| {
writeln!(
&mut macro_file,
" symcc_runtime::export_c_symbol!({});",
&captures[1]
)
.unwrap();
});
writeln!(
&mut macro_file,
" }};
}}",
)
.unwrap();
}
fn checkout_symcc(out_path: &Path) -> PathBuf {
let repo_dir = out_path.join("libafl_symcc_src");
if repo_dir.exists() {
repo_dir
} else {
build_dep_check(&["git"]);
let mut cmd = Command::new("git");
cmd.arg("clone").arg(SYMCC_REPO_URL).arg(&repo_dir);
let output = cmd.output().expect("failed to execute git clone");
if output.status.success() {
let mut cmd = Command::new("git");
cmd.arg("checkout")
.arg(SYMCC_REPO_COMMIT)
.current_dir(&repo_dir);
let output = cmd.output().expect("failed to execute git checkout");
if output.status.success() {
repo_dir
} else {
println!("failed to checkout symcc git repository commit:");
let mut stdout = stdout();
stdout
.write_all(&output.stderr)
.expect("failed to write git error message to stdout");
exit(1)
}
} else {
println!("failed to clone symcc git repository:");
let mut stdout = stdout();
stdout
.write_all(&output.stderr)
.expect("failed to write git error message to stdout");
exit(1)
}
}
}
fn write_rust_runtime_macro_file(out_path: &Path, symcc_src_path: &Path) {
let rust_bindings = bindgen::Builder::default()
.clang_arg(format!(
"-I{}",
symcc_src_path.join("runtime").to_str().unwrap()
))
.clang_arg(format!(
"-I{}",
symcc_src_path
.join("runtime")
.join("rust_backend")
.to_str()
.unwrap()
))
.clang_args(["-x", "c++", "-std=c++17"].iter())
.header(
symcc_src_path
.join("runtime")
.join("rust_backend")
.join("RustRuntime.h")
.to_str()
.unwrap(),
)
.allowlist_type("RSymExpr")
.allowlist_function("_rsym_.*")
.opaque_type("_.*")
.size_t_is_usize(true)
.generate()
.expect("Unable to generate bindings");
let mut rust_runtime_macro = File::create(out_path.join("rust_exports_macro.rs")).unwrap();
writeln!(
&mut rust_runtime_macro,
"#[doc(hidden)]
#[macro_export]
macro_rules! invoke_macro_with_rust_runtime_exports {{
($macro:path; $($extra_ident:path),*) => {{",
)
.unwrap();
EXPORTED_FUNCTION_REGEX
.captures_iter(&rust_bindings.to_string())
.for_each(|captures| {
writeln!(
&mut rust_runtime_macro,
" $macro!({},{}; $($extra_ident),*);",
&captures[1].replace("_rsym_", ""),
&FUNCTION_NAME_REGEX.captures(&captures[1]).unwrap()[1]
)
.unwrap();
});
writeln!(
&mut rust_runtime_macro,
" }};
}}",
)
.unwrap();
}
fn write_symcc_runtime_bindings_file(out_path: &Path, cpp_bindings: &bindgen::Bindings) {
let mut bindings_file = File::create(out_path.join("bindings.rs")).unwrap();
cpp_bindings.to_string().lines().for_each(|l| {
if let Some(captures) = FUNCTION_NAME_REGEX.captures(l) {
let function_name = &captures[1];
writeln!(
&mut bindings_file,
"#[link_name=\"{}{}\"]",
SYMCC_RUNTIME_FUNCTION_NAME_PREFIX, function_name
)
.unwrap();
}
writeln!(&mut bindings_file, "{}", l).unwrap();
});
}
fn write_symcc_rename_header(rename_header_path: &Path, cpp_bindings: &bindgen::Bindings) {
let mut rename_header_file = File::create(rename_header_path).unwrap();
writeln!(
&mut rename_header_file,
"#ifndef PREFIX_EXPORTS_H
#define PREFIX_EXPORTS_H",
)
.unwrap();
cpp_bindings
.to_string()
.lines()
.filter_map(|l| FUNCTION_NAME_REGEX.captures(l))
.map(|captures| captures[1].to_string())
.for_each(|val| {
writeln!(
&mut rename_header_file,
"#define {} {}{}",
&val, SYMCC_RUNTIME_FUNCTION_NAME_PREFIX, &val
)
.unwrap();
});
writeln!(&mut rename_header_file, "#endif").unwrap();
}
fn build_and_link_symcc_runtime(symcc_src_path: &Path, rename_header_path: &Path) {
build_dep_check(&["cmake"]);
let cpp_lib = cmake::Config::new(symcc_src_path.join("runtime"))
.define("RUST_BACKEND", "ON")
.cxxflag(format!(
"-include \"{}\"",
rename_header_path.to_str().unwrap()
))
.build()
.join("lib");
link_with_cpp_stdlib();
println!("cargo:rustc-link-search=native={}", cpp_lib.display());
println!("cargo:rustc-link-lib=static=SymRuntime");
}
fn link_with_cpp_stdlib() {
let target = env::var("TARGET").unwrap();
if target.contains("apple") {
println!("cargo:rustc-link-lib=dylib=c++");
} else if target.contains("linux") {
println!("cargo:rustc-link-lib=dylib=stdc++");
} else {
unimplemented!();
}
}
fn build_dep_check(tools: &[&str]) {
for tool in tools {
println!("Checking for build tool {}...", tool);
if let Ok(path) = which::which(tool) {
println!("Found build tool {}", path.to_str().unwrap());
} else {
println!("ERROR: missing build tool {}", tool);
exit(1);
};
}
}

View File

@ -0,0 +1,402 @@
use std::collections::HashSet;
#[allow(clippy::wildcard_imports)]
use crate::*;
macro_rules! rust_filter_function_declaration {
(pub fn expression_unreachable(expressions: *mut RSymExpr, num_elements: usize), $c_name:ident;) => {
};
(pub fn push_path_constraint($( $arg:ident : $type:ty ),*$(,)?), $c_name:ident;) => {
#[allow(unused_variables)]
fn push_path_constraint(&mut self, $($arg : $type),*) -> bool {
true
}
};
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?) -> $ret:ty, $c_name:ident;) => {
#[allow(unused_variables)]
fn $name(&mut self, $( $arg : $type),*) -> bool {true}
};
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?), $c_name:ident;) => {
#[allow(unused_variables)]
fn $name(&mut self, $( $arg : $type),*) {}
};
}
/// A [`Filter`] decides for each expression whether it should be traced symbolically or be
/// concretized. This allows implementing filtering mechanisms that reduce the number of traced expressions by
/// concretizing uninteresting ones.
/// If a filter concretizes an expression that is later used as part of another expression that
/// is still symbolic, the concrete value is received instead of the symbolic one.
///
/// For example:
/// Suppose there are symbolic expressions `a` and `b`. Expression `a` is concretized, `b` is still symbolic. If an add
/// operation between `a` and `b` is encountered, it will receive `a`'s concrete value and `b` as a symbolic expression.
///
/// An expression filter also receives code locations (via the `notify_*` methods) as they are visited in between
/// operations; these code locations are typically used to decide whether an expression should be concretized.
pub trait Filter {
invoke_macro_with_rust_runtime_exports!(rust_filter_function_declaration;);
}
/// A `FilterRuntime` wraps a [`Runtime`] with a [`Filter`], applying the filter before passing expressions to the inner
/// runtime.
/// It also implements [`Runtime`], allowing multiple [`Filter`]s to be composed in a chain.
#[allow(clippy::module_name_repetitions)]
pub struct FilterRuntime<F, RT> {
filter: F,
runtime: RT,
}
impl<F, RT> FilterRuntime<F, RT> {
pub fn new(filter: F, runtime: RT) -> Self {
Self { filter, runtime }
}
}
macro_rules! rust_filter_function_implementation {
(pub fn expression_unreachable(expressions: *mut RSymExpr, num_elements: usize), $c_name:ident;) => {
fn expression_unreachable(&mut self, exprs: &[RSymExpr]) {
self.runtime.expression_unreachable(exprs)
}
};
(pub fn push_path_constraint($( $arg:ident : $type:ty ),*$(,)?), $c_name:ident;) => {
fn push_path_constraint(&mut self, $($arg : $type),*) {
if self.filter.push_path_constraint($($arg),*) {
self.runtime.push_path_constraint($($arg),*)
}
}
};
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?) -> $ret:ty, $c_name:ident;) => {
fn $name(&mut self, $($arg : $type),*) -> Option<$ret> {
if self.filter.$name($($arg),*) {
self.runtime.$name($($arg),*)
} else {
None
}
}
};
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?), $c_name:ident;) => {
fn $name(&mut self, $( $arg : $type),*) {
self.filter.$name($($arg),*);
self.runtime.$name($($arg),*);
}
};
}
impl<F, RT> Runtime for FilterRuntime<F, RT>
where
F: Filter,
RT: Runtime,
{
invoke_macro_with_rust_runtime_exports!(rust_filter_function_implementation;);
}
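// Filters nest: for example (illustrative wiring only),
// `FilterRuntime::new(NoFloat, FilterRuntime::new(SelectiveSymbolication::new(offsets), runtime))`
// first drops floating point expressions and then restricts symbolization to the given input offsets.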
/// A [`Filter`] that concretizes all input byte expressions that are not included in a predetermined set
/// of input byte offsets.
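/// For example, `SelectiveSymbolication::new([0, 1, 2, 3].iter().copied().collect())` keeps only
/// the first four input bytes symbolic (illustrative offsets).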
pub struct SelectiveSymbolication {
bytes_to_symbolize: HashSet<usize>,
}
impl SelectiveSymbolication {
#[must_use]
pub fn new(offset: HashSet<usize>) -> Self {
Self {
bytes_to_symbolize: offset,
}
}
}
impl Filter for SelectiveSymbolication {
fn get_input_byte(&mut self, offset: usize) -> bool {
self.bytes_to_symbolize.contains(&offset)
}
}
/// Concretizes all floating point operations.
pub struct NoFloat;
impl Filter for NoFloat {
fn build_float(&mut self, _value: f64, _is_double: bool) -> bool {
false
}
fn build_float_ordered(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_ordered_equal(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_ordered_greater_equal(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_ordered_greater_than(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_ordered_less_equal(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_ordered_less_than(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_ordered_not_equal(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_to_bits(&mut self, _expr: RSymExpr) -> bool {
false
}
fn build_float_to_float(&mut self, _expr: RSymExpr, _to_double: bool) -> bool {
false
}
fn build_float_to_signed_integer(&mut self, _expr: RSymExpr, _bits: u8) -> bool {
false
}
fn build_float_to_unsigned_integer(&mut self, _expr: RSymExpr, _bits: u8) -> bool {
false
}
fn build_float_unordered(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_unordered_equal(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_unordered_greater_equal(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_unordered_greater_than(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_unordered_less_equal(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_unordered_less_than(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_float_unordered_not_equal(&mut self, _a: RSymExpr, _b: RSymExpr) -> bool {
false
}
fn build_int_to_float(&mut self, _value: RSymExpr, _is_double: bool, _is_signed: bool) -> bool {
false
}
fn build_bits_to_float(&mut self, _expr: RSymExpr, _to_double: bool) -> bool {
false
}
}
pub mod coverage {
use std::{
collections::hash_map::DefaultHasher,
convert::TryInto,
hash::{BuildHasher, BuildHasherDefault, Hash, Hasher},
marker::PhantomData,
};
use libafl::bolts::shmem::ShMem;
use super::Filter;
const MAP_SIZE: usize = 65536;
/// A coverage-based [`Filter`] implementing the expression pruning scheme from [`QSym`](https://github.com/sslab-gatech/qsym),
/// specifically its [call stack manager](https://github.com/sslab-gatech/qsym/blob/master/qsym/pintool/call_stack_manager.cpp).
struct CallStackCoverage<
THasher: Hasher = DefaultHasher,
THashBuilder: BuildHasher = BuildHasherDefault<THasher>,
> {
call_stack: Vec<usize>,
call_stack_hash: u64,
is_interesting: bool,
bitmap: Vec<u16>,
pending: bool,
last_location: usize,
hasher_builder: THashBuilder,
hasher_phantom: PhantomData<THasher>,
}
impl Default for CallStackCoverage<DefaultHasher, BuildHasherDefault<DefaultHasher>> {
fn default() -> Self {
Self {
call_stack: Vec::new(),
call_stack_hash: 0,
is_interesting: true,
bitmap: vec![0; MAP_SIZE],
pending: false,
last_location: 0,
hasher_builder: BuildHasherDefault::default(),
hasher_phantom: PhantomData,
}
}
}
impl<THasher: Hasher, THashBuilder: BuildHasher> CallStackCoverage<THasher, THashBuilder> {
pub fn visit_call(&mut self, location: usize) {
self.call_stack.push(location);
self.update_call_stack_hash();
}
pub fn visit_ret(&mut self, location: usize) {
if self.call_stack.is_empty() {
return;
}
let num_elements_to_remove = self
.call_stack
.iter()
.rev()
.take_while(|&&loc| loc != location)
.count()
+ 1;
self.call_stack
.truncate(self.call_stack.len() - num_elements_to_remove);
self.update_call_stack_hash();
}
pub fn visit_basic_block(&mut self, location: usize) {
self.last_location = location;
self.pending = true;
}
pub fn is_interesting(&self) -> bool {
self.is_interesting
}
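/// Hashes the pending (call stack, basic block location) pair into the bitmap and updates
/// `is_interesting`: a pair stays interesting while its hit count is still low, with
/// exponentially spaced refresh points afterwards (QSym-style back-off).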
pub fn update_bitmap(&mut self) {
if self.pending {
self.pending = false;
let mut hasher = self.hasher_builder.build_hasher();
self.last_location.hash(&mut hasher);
self.call_stack_hash.hash(&mut hasher);
let hash = hasher.finish();
let index: usize = (hash % MAP_SIZE as u64).try_into().unwrap();
let value = self.bitmap[index] / 8;
self.is_interesting = value == 0 || value.is_power_of_two();
*self.bitmap.get_mut(index).unwrap() += 1;
}
}
fn update_call_stack_hash(&mut self) {
let mut hasher = self.hasher_builder.build_hasher();
self.call_stack
.iter()
.for_each(|&loc| loc.hash(&mut hasher));
self.call_stack_hash = hasher.finish();
}
}
macro_rules! call_stack_coverage_filter_function_implementation {
(pub fn expression_unreachable(expressions: *mut RSymExpr, num_elements: usize), $c_name:ident;) => {
};
(pub fn notify_basic_block(site_id: usize), $c_name:ident;) => {
fn notify_basic_block(&mut self, site_id: usize) {
self.visit_basic_block(site_id);
}
};
(pub fn notify_call(site_id: usize), $c_name:ident;) => {
fn notify_call(&mut self, site_id: usize) {
self.visit_call(site_id);
}
};
(pub fn notify_ret(site_id: usize), $c_name:ident;) => {
fn notify_ret(&mut self, site_id: usize) {
self.visit_ret(site_id);
}
};
(pub fn push_path_constraint($( $arg:ident : $type:ty ),*$(,)?), $c_name:ident;) => {
fn push_path_constraint(&mut self, $( _ : $type ),*) -> bool {
self.update_bitmap();
self.is_interesting()
}
};
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?) -> $ret:ty, $c_name:ident;) => {
fn $name(&mut self, $( _ : $type),*) -> bool {
self.update_bitmap();
self.is_interesting()
}
};
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?), $c_name:ident;) => {
fn $name(&mut self, $( _ : $type),*) {
}
};
}
#[allow(clippy::wildcard_imports)]
use crate::*;
impl<THasher: Hasher, THashBuilder: BuildHasher> Filter
for CallStackCoverage<THasher, THashBuilder>
{
invoke_macro_with_rust_runtime_exports!(call_stack_coverage_filter_function_implementation;);
}
/// A [`Filter`] that just observes basic block locations and updates a given hitmap backed by a [`ShMem`].
pub struct HitmapFilter<M, BH: BuildHasher = BuildHasherDefault<DefaultHasher>> {
hitcounts_map: M,
build_hasher: BH,
}
impl<M> HitmapFilter<M, BuildHasherDefault<DefaultHasher>>
where
M: ShMem,
{
/// Creates a new `HitmapFilter` using the given map and the [`DefaultHasher`].
pub fn new(hitcounts_map: M) -> Self {
Self::new_with_default_hasher_builder(hitcounts_map)
}
}
impl<M, H> HitmapFilter<M, BuildHasherDefault<H>>
where
M: ShMem,
H: Hasher + Default,
{
/// Creates a new `HitmapFilter` using the given map and [`Hasher`] (as type argument) using the [`BuildHasherDefault`].
pub fn new_with_default_hasher_builder(hitcounts_map: M) -> Self {
Self::new_with_build_hasher(hitcounts_map, BuildHasherDefault::default())
}
}
impl<M, BH> HitmapFilter<M, BH>
where
M: ShMem,
BH: BuildHasher,
{
/// Creates a new `HitmapFilter` using the given map and [`BuildHasher`] (as type argument).
pub fn new_with_build_hasher(hitcounts_map: M, build_hasher: BH) -> Self {
Self {
hitcounts_map,
build_hasher,
}
}
fn register_location_on_hitmap(&mut self, location: usize) {
let mut hasher = self.build_hasher.build_hasher();
location.hash(&mut hasher);
let hash = (hasher.finish() % usize::MAX as u64) as usize;
let val = unsafe {
// SAFETY: the index is taken modulo the map length, so it is always in bounds
let len = self.hitcounts_map.len();
self.hitcounts_map.map_mut().get_unchecked_mut(hash % len)
};
*val = val.saturating_add(1);
}
}
impl<M, BH> Filter for HitmapFilter<M, BH>
where
M: ShMem,
BH: BuildHasher,
{
fn notify_basic_block(&mut self, location_id: usize) {
self.register_location_on_hitmap(location_id);
}
}
}

View File

@ -0,0 +1,194 @@
pub mod filter;
pub mod tracing;
#[doc(hidden)]
#[cfg(target_os = "linux")]
pub mod cpp_runtime {
#![allow(non_upper_case_globals)]
#![allow(non_camel_case_types)]
#![allow(non_snake_case)]
include!(concat!(env!("OUT_DIR"), "/bindings.rs"));
}
#[doc(hidden)]
pub use ctor::ctor;
use libafl::observers::concolic;
#[doc(hidden)]
pub use libc::atexit;
#[doc(hidden)]
pub use unchecked_unwrap;
#[doc(hidden)]
#[cfg(target_os = "linux")]
#[macro_export]
macro_rules! export_c_symbol {
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?) -> $ret:ty) => {
use $crate::cpp_runtime::*;
#[no_mangle]
pub unsafe extern "C" fn $name($( $arg : $type),*) -> $ret {
$crate::cpp_runtime::$name($( $arg ),*)
}
};
(pub fn $name:ident($( $arg:ident : $type:ty ),* $(,)?)) => {
$crate::export_c_symbol!(pub fn $name($( $arg : $type),*) -> ());
}
}
#[cfg(target_os = "linux")]
include!(concat!(env!("OUT_DIR"), "/cpp_exports_macro.rs"));
include!(concat!(env!("OUT_DIR"), "/rust_exports_macro.rs"));
macro_rules! rust_runtime_function_declaration {
(pub fn expression_unreachable(expressions: *mut RSymExpr, num_elements: usize), $c_name:ident;) => {
fn expression_unreachable(&mut self, exprs: &[RSymExpr]);
};
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?)$( -> $ret:ty)?, $c_name:ident;) => {
fn $name(&mut self, $( $arg : $type),*)$( -> Option<$ret>)?;
};
}
pub type RSymExpr = concolic::SymExprRef;
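/// A `Runtime` receives the callbacks emitted by the SymCC instrumentation. Its methods are
/// generated from the exported runtime functions; expression-building methods return an
/// `Option<RSymExpr>`, where `None` means the expression is concretized.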
pub trait Runtime {
invoke_macro_with_rust_runtime_exports!(rust_runtime_function_declaration;);
}
#[doc(hidden)]
#[macro_export]
macro_rules! make_symexpr_optional {
(RSymExpr) => {Option<RSymExpr>};
($($type:tt)+) => {$($type)+};
}
#[doc(hidden)]
#[macro_export]
macro_rules! unwrap_option {
($param_name:ident: RSymExpr) => {
$param_name?
};
($param_name:ident: $($type:tt)+) => {
$param_name
};
}
#[doc(hidden)]
#[macro_export]
macro_rules! export_rust_runtime_fn {
(pub fn expression_unreachable(expressions: *mut RSymExpr, num_elements: usize), $c_name:ident; $rt_cb:path) => {
#[allow(clippy::missing_safety_doc)]
#[no_mangle]
pub unsafe extern "C" fn _rsym_expression_unreachable(expressions: *mut RSymExpr, num_elements: usize) {
let slice = core::slice::from_raw_parts(expressions, num_elements);
$rt_cb(|rt| {
rt.expression_unreachable(slice);
})
}
};
(pub fn push_path_constraint(constraint: RSymExpr, taken: bool, site_id: usize), $c_name:ident; $rt_cb:path) => {
#[allow(clippy::missing_safety_doc)]
#[no_mangle]
pub unsafe extern "C" fn _rsym_push_path_constraint(constraint: Option<RSymExpr>, taken: bool, site_id: usize) {
if let Some(constraint) = constraint {
$rt_cb(|rt| {
rt.push_path_constraint(constraint, taken, site_id);
})
}
}
};
(pub fn $name:ident($( $arg:ident : $(::)?$($type:ident)::+ ),*$(,)?)$( -> $($ret:ident)::+)?, $c_name:ident; $rt_cb:path) => {
#[allow(clippy::missing_safety_doc)]
#[no_mangle]
pub unsafe extern "C" fn $c_name( $($arg: $crate::make_symexpr_optional!($($type)::+),)* )$( -> $crate::make_symexpr_optional!($($ret)::+))? {
$rt_cb(|rt| {
$(let $arg = $crate::unwrap_option!($arg: $($type)::+);)*
rt.$name($($arg,)*)
})
}
};
}
macro_rules! impl_nop_runtime_fn {
(pub fn expression_unreachable(expressions: *mut RSymExpr, num_elements: usize), $c_name:ident;) => {
#[allow(clippy::default_trait_access)]
fn expression_unreachable(&mut self, _exprs: &[RSymExpr]) {std::default::Default::default()}
};
(pub fn $name:ident($( $arg:ident : $type:ty ),*$(,)?)$( -> $ret:ty)?, $c_name:ident;) => {
#[allow(clippy::default_trait_access)]
fn $name(&mut self, $( _ : $type),*)$( -> Option<$ret>)? {std::default::Default::default()}
};
}
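/// A [`Runtime`] that ignores all callbacks and never creates any expressions.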
pub struct NopRuntime;
impl Runtime for NopRuntime {
invoke_macro_with_rust_runtime_exports!(impl_nop_runtime_fn;);
}
#[macro_export]
macro_rules! export_runtime {
($filter_constructor:expr => $filter:ty ; $($constructor:expr => $rt:ty);+) => {
export_runtime!(@final export_runtime!(@combine_constructor $filter_constructor; $($constructor);+) => export_runtime!(@combine_type $filter; $($rt);+));
};
($constructor:expr => $rt:ty) => {
export_runtime!(@final $constructor => $rt);
};
(@combine_constructor $filter_constructor:expr ; $($constructor:expr);+) => {
$crate::filter::FilterRuntime::new($filter_constructor, export_runtime!(@combine_constructor $($constructor);+))
};
(@combine_constructor $constructor:expr) => {
$constructor
};
(@combine_type $filter:ty ; $($rt:ty);+) => {
$crate::filter::FilterRuntime<$filter, export_runtime!(@combine_type $($rt);+)>
};
(@combine_type $rt:ty) => {
$rt
};
(@final $constructor:expr => $rt:ty) => {
// We are creating a piece of shared mutable state here for our runtime, which is used unsafely.
// The correct solution here would be to either use a mutex or have per-thread state,
// however, this is not really supported in SymCC yet.
// Therefore we make the assumption that there is only ever a single thread, which should
// mean that this is 'safe'.
static mut GLOBAL_DATA: Option<$rt> = None;
#[cfg(not(test))]
#[$crate::ctor]
fn init() {
// See comment on GLOBAL_DATA declaration.
unsafe {
GLOBAL_DATA = Some($constructor);
$crate::atexit(fini);
}
}
/// [`libc::atexit`] handler
extern "C" fn fini() {
// drops the global data object
unsafe {
if let Some(state) = GLOBAL_DATA.take() {
}
}
}
use $crate::RSymExpr;
/// A little helper function that encapsulates access to the shared mutable state.
fn with_state<R>(cb: impl FnOnce(&mut $rt) -> R) -> R {
use $crate::unchecked_unwrap::UncheckedUnwrap;
let s = unsafe { GLOBAL_DATA.as_mut().unchecked_unwrap() };
cb(s)
}
$crate::invoke_macro_with_rust_runtime_exports!($crate::export_rust_runtime_fn;with_state);
#[cfg(target_os="linux")]
$crate::export_cpp_runtime_functions!();
};
}

View File

@ -0,0 +1,176 @@
pub use libafl::observers::concolic::{
serialization_format::shared_memory::StdShMemMessageFileWriter, SymExpr,
};
use crate::{RSymExpr, Runtime};
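/// A [`Runtime`] that traces all expressions into a shared-memory message file using a
/// [`StdShMemMessageFileWriter`].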
pub struct TracingRuntime {
writer: StdShMemMessageFileWriter,
}
impl TracingRuntime {
#[must_use]
pub fn new(writer: StdShMemMessageFileWriter) -> Self {
Self { writer }
}
#[allow(clippy::unnecessary_wraps)]
fn write_message(&mut self, message: SymExpr) -> Option<RSymExpr> {
Some(self.writer.write_message(message).unwrap())
}
}
/// A macro to generate the boilerplate for declaring a runtime function for SymCC that simply logs the function call
/// according to [`concolic::SymExpr`].
macro_rules! expression_builder {
($method_name:ident ( $($param_name:ident : $param_type:ty ),+ ) => $message:ident) => {
#[allow(clippy::missing_safety_doc)]
#[no_mangle]
fn $method_name(&mut self, $( $param_name : $param_type, )+ ) -> Option<RSymExpr> {
self.write_message(SymExpr::$message { $($param_name,)+ })
}
};
($method_name:ident () => $message:ident) => {
#[allow(clippy::missing_safety_doc)]
#[no_mangle]
fn $method_name(&mut self) -> Option<RSymExpr> {
self.write_message(SymExpr::$message)
}
};
}
macro_rules! unary_expression_builder {
($c_name:ident, $message:ident) => {
expression_builder!($c_name(op: RSymExpr) => $message);
};
}
macro_rules! binary_expression_builder {
($c_name:ident, $message:ident) => {
expression_builder!($c_name(a: RSymExpr, b: RSymExpr) => $message);
};
}
impl Runtime for TracingRuntime {
expression_builder!(get_input_byte(offset: usize) => GetInputByte);
expression_builder!(build_integer(value: u64, bits: u8) => BuildInteger);
expression_builder!(build_integer128(high: u64, low: u64) => BuildInteger128);
expression_builder!(build_float(value: f64, is_double: bool) => BuildFloat);
expression_builder!(build_null_pointer() => BuildNullPointer);
expression_builder!(build_true() => BuildTrue);
expression_builder!(build_false() => BuildFalse);
expression_builder!(build_bool(value: bool) => BuildBool);
unary_expression_builder!(build_neg, BuildNeg);
binary_expression_builder!(build_add, BuildAdd);
binary_expression_builder!(build_sub, BuildSub);
binary_expression_builder!(build_mul, BuildMul);
binary_expression_builder!(build_unsigned_div, BuildUnsignedDiv);
binary_expression_builder!(build_signed_div, BuildSignedDiv);
binary_expression_builder!(build_unsigned_rem, BuildUnsignedRem);
binary_expression_builder!(build_signed_rem, BuildSignedRem);
binary_expression_builder!(build_shift_left, BuildShiftLeft);
binary_expression_builder!(build_logical_shift_right, BuildLogicalShiftRight);
binary_expression_builder!(build_arithmetic_shift_right, BuildArithmeticShiftRight);
binary_expression_builder!(build_signed_less_than, BuildSignedLessThan);
binary_expression_builder!(build_signed_less_equal, BuildSignedLessEqual);
binary_expression_builder!(build_signed_greater_than, BuildSignedGreaterThan);
binary_expression_builder!(build_signed_greater_equal, BuildSignedGreaterEqual);
binary_expression_builder!(build_unsigned_less_than, BuildUnsignedLessThan);
binary_expression_builder!(build_unsigned_less_equal, BuildUnsignedLessEqual);
binary_expression_builder!(build_unsigned_greater_than, BuildUnsignedGreaterThan);
binary_expression_builder!(build_unsigned_greater_equal, BuildUnsignedGreaterEqual);
binary_expression_builder!(build_and, BuildAnd);
binary_expression_builder!(build_or, BuildOr);
binary_expression_builder!(build_xor, BuildXor);
binary_expression_builder!(build_float_ordered, BuildFloatOrdered);
binary_expression_builder!(
build_float_ordered_greater_than,
BuildFloatOrderedGreaterThan
);
binary_expression_builder!(
build_float_ordered_greater_equal,
BuildFloatOrderedGreaterEqual
);
binary_expression_builder!(build_float_ordered_less_than, BuildFloatOrderedLessThan);
binary_expression_builder!(build_float_ordered_less_equal, BuildFloatOrderedLessEqual);
binary_expression_builder!(build_float_ordered_equal, BuildFloatOrderedEqual);
binary_expression_builder!(build_float_ordered_not_equal, BuildFloatOrderedNotEqual);
binary_expression_builder!(build_float_unordered, BuildFloatUnordered);
binary_expression_builder!(
build_float_unordered_greater_than,
BuildFloatUnorderedGreaterThan
);
binary_expression_builder!(
build_float_unordered_greater_equal,
BuildFloatUnorderedGreaterEqual
);
binary_expression_builder!(build_float_unordered_less_than, BuildFloatUnorderedLessThan);
binary_expression_builder!(
build_float_unordered_less_equal,
BuildFloatUnorderedLessEqual
);
binary_expression_builder!(build_float_unordered_equal, BuildFloatUnorderedEqual);
binary_expression_builder!(build_float_unordered_not_equal, BuildFloatUnorderedNotEqual);
binary_expression_builder!(build_fp_add, BuildFloatAdd);
binary_expression_builder!(build_fp_sub, BuildFloatSub);
binary_expression_builder!(build_fp_mul, BuildFloatMul);
binary_expression_builder!(build_fp_div, BuildFloatDiv);
binary_expression_builder!(build_fp_rem, BuildFloatRem);
unary_expression_builder!(build_fp_abs, BuildFloatAbs);
unary_expression_builder!(build_not, BuildNot);
binary_expression_builder!(build_equal, BuildEqual);
binary_expression_builder!(build_not_equal, BuildNotEqual);
binary_expression_builder!(build_bool_and, BuildBoolAnd);
binary_expression_builder!(build_bool_or, BuildBoolOr);
binary_expression_builder!(build_bool_xor, BuildBoolXor);
expression_builder!(build_sext(op: RSymExpr, bits: u8) => BuildSext);
expression_builder!(build_zext(op: RSymExpr, bits: u8) => BuildZext);
expression_builder!(build_trunc(op: RSymExpr, bits: u8) => BuildTrunc);
expression_builder!(build_int_to_float(op: RSymExpr, is_double: bool, is_signed: bool) => BuildIntToFloat);
expression_builder!(build_float_to_float(op: RSymExpr, to_double: bool) => BuildFloatToFloat);
expression_builder!(build_bits_to_float(op: RSymExpr, to_double: bool) => BuildBitsToFloat);
expression_builder!(build_float_to_bits(op: RSymExpr) => BuildFloatToBits);
expression_builder!(build_float_to_signed_integer(op: RSymExpr, bits: u8) => BuildFloatToSignedInteger);
expression_builder!(build_float_to_unsigned_integer(op: RSymExpr, bits: u8) => BuildFloatToUnsignedInteger);
expression_builder!(build_bool_to_bits(op: RSymExpr, bits: u8) => BuildBoolToBits);
binary_expression_builder!(concat_helper, ConcatHelper);
expression_builder!(extract_helper(op: RSymExpr, first_bit:usize, last_bit:usize) => ExtractHelper);
fn notify_call(&mut self, _site_id: usize) {}
fn notify_ret(&mut self, _site_id: usize) {}
fn notify_basic_block(&mut self, _site_id: usize) {}
fn expression_unreachable(&mut self, exprs: &[RSymExpr]) {
self.write_message(SymExpr::ExpressionsUnreachable {
exprs: exprs.to_owned(),
});
}
fn push_path_constraint(&mut self, constraint: RSymExpr, taken: bool, site_id: usize) {
self.write_message(SymExpr::PushPathConstraint {
constraint,
taken,
site_id,
});
}
}
impl Drop for TracingRuntime {
fn drop(&mut self) {
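// finalize the trace so that readers can tell it ended cleanly rather than being cut off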
self.writer.end().expect("failed to shut down writer");
}
}

libafl_concolic/test/.gitignore vendored Normal file
View File

@ -0,0 +1,7 @@
symcc
symcc_build
symqemu_build
if
constraints.txt
constraints_filtered.txt
expected_constraints_filtered.txt

View File

@ -0,0 +1,10 @@
[package]
name = "dump_constraints"
version = "0.1.0"
edition = "2018"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
libafl = {path = "../../../libafl"}
structopt = "0.3.21"

View File

@ -0,0 +1,150 @@
use std::{
ffi::OsString,
fs::File,
io::{BufWriter, Write},
path::PathBuf,
process::{exit, Command},
string::ToString,
};
use structopt::StructOpt;
use libafl::{
bolts::shmem::{ShMem, ShMemProvider, StdShMemProvider},
observers::concolic::{
serialization_format::{
shared_memory::DEFAULT_ENV_NAME, MessageFileReader, MessageFileWriter,
},
SymExpr, EXPRESSION_PRUNING, HITMAP_ENV_NAME, NO_FLOAT_ENV_NAME,
SELECTIVE_SYMBOLICATION_ENV_NAME,
},
};
#[derive(Debug, StructOpt)]
#[structopt(
name = "dump_constraints",
about = "Dump tool for concolic constraints."
)]
struct Opt {
/// Outputs plain text instead of binary
#[structopt(short, long)]
plain_text: bool,
/// Outputs coverage information to the given file
#[structopt(short, long)]
coverage_file: Option<PathBuf>,
/// Symbolizes only the given input file offsets.
#[structopt(short, long)]
symbolize_offsets: Option<Vec<usize>>,
/// Concretize all floating point operations.
#[structopt(long)]
no_float: bool,
/// Prune expressions from high-frequency code locations.
#[structopt(long)]
prune: bool,
/// Trace file path, "trace" by default.
#[structopt(parse(from_os_str), short, long)]
output: Option<PathBuf>,
/// Target program and arguments
#[structopt(last = true)]
program: Vec<OsString>,
}
fn main() {
const COVERAGE_MAP_SIZE: usize = 65536;
let opt = Opt::from_args();
let mut shmemprovider = StdShMemProvider::default();
let concolic_shmem = shmemprovider
.new_map(1024 * 1024 * 1024)
.expect("unable to create shared mapping");
concolic_shmem
.write_to_env(DEFAULT_ENV_NAME)
.expect("unable to write shared mapping info to environment");
let coverage_map = StdShMemProvider::new()
.unwrap()
.new_map(COVERAGE_MAP_SIZE)
.unwrap();
// let the traced target's runtime know the hitmap's shared memory id via the environment
coverage_map.write_to_env(HITMAP_ENV_NAME).unwrap();
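// the runtime's expression filters are configured through environment variables that the
// traced target reads on startup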
if let Some(symbolize_offsets) = opt.symbolize_offsets {
std::env::set_var(
SELECTIVE_SYMBOLICATION_ENV_NAME,
symbolize_offsets
.iter()
.map(ToString::to_string)
.collect::<Vec<_>>()
.join(","),
);
}
if opt.no_float {
std::env::set_var(NO_FLOAT_ENV_NAME, "1");
}
if opt.prune {
std::env::set_var(EXPRESSION_PRUNING, "1");
}
let res = Command::new(&opt.program.first().expect("no program argument given"))
.args(opt.program.iter().skip(1))
.status()
.expect("failed to spawn program");
{
if let Some(coverage_file_path) = opt.coverage_file {
let mut f = BufWriter::new(
File::create(coverage_file_path).expect("unable to open coverage file"),
);
for (index, count) in coverage_map
.map()
.iter()
.enumerate()
.filter(|(_, &v)| v != 0)
{
writeln!(&mut f, "{}\t{}", index, count).expect("failed to write coverage file");
}
}
// open a new scope to ensure our resources get dropped before the exit call at the end
let output_file_path = opt.output.unwrap_or_else(|| "trace".into());
let mut output_file =
BufWriter::new(File::create(output_file_path).expect("unable to open output file"));
let mut reader = MessageFileReader::from_length_prefixed_buffer(concolic_shmem.map())
.expect("unable to create trace reader");
if opt.plain_text {
while let Some(message) = reader.next_message() {
if let Ok((id, message)) = message {
writeln!(&mut output_file, "{}\t{:?}", id, message)
.expect("failed to write to output file");
} else {
break;
}
}
} else {
let mut writer =
MessageFileWriter::from_writer(output_file).expect("unable to create trace writer");
while let Some(message) = reader.next_message() {
if let Ok((_, message)) = message {
writer
.write_message(message)
.expect("unable to write message");
} else {
break;
}
}
writer
.write_message(SymExpr::End)
.expect("unable to write end message");
}
}
exit(res.code().expect("failed to get exit code from program"));
}

View File

@ -0,0 +1,30 @@
1 GetInputByte { offset: 0 }
2 GetInputByte { offset: 1 }
3 GetInputByte { offset: 2 }
4 GetInputByte { offset: 3 }
5 ConcatHelper { a: 2, b: 1 }
6 ConcatHelper { a: 3, b: 5 }
7 ConcatHelper { a: 4, b: 6 }
8 ConcatHelper { a: 2, b: 1 }
9 ConcatHelper { a: 3, b: 8 }
10 ConcatHelper { a: 4, b: 9 }
11 ExtractHelper { op: 10, first_bit: 7, last_bit: 0 }
12 ExtractHelper { op: 10, first_bit: 15, last_bit: 8 }
13 ExtractHelper { op: 10, first_bit: 23, last_bit: 16 }
14 ExtractHelper { op: 10, first_bit: 31, last_bit: 24 }
15 ConcatHelper { a: 12, b: 11 }
16 ConcatHelper { a: 13, b: 15 }
17 ConcatHelper { a: 14, b: 16 }
18 BuildInteger { value: 2, bits: 32 }
19 BuildMul { a: 18, b: 17 }
20 BuildInteger { value: 7, bits: 32 }
21 BuildSignedLessThan { a: 19, b: 20 }
22 PushPathConstraint { constraint: 21, taken: false, site_id: 11229456 }
22 ConcatHelper { a: 12, b: 11 }
23 ConcatHelper { a: 13, b: 22 }
24 ConcatHelper { a: 14, b: 23 }
25 BuildInteger { value: 7, bits: 32 }
26 BuildSignedRem { a: 24, b: 25 }
27 BuildInteger { value: 0, bits: 32 }
28 BuildNotEqual { a: 26, b: 27 }
29 PushPathConstraint { constraint: 28, taken: true, site_id: 11122032 }

View File

@ -0,0 +1 @@
1234

View File

@ -0,0 +1,13 @@
[package]
name = "runtime_test"
version = "0.1.0"
edition = "2018"
[lib]
crate-type = ["cdylib"]
name = "SymRuntime"
# See more keys and their definitions at https://doc.rust-lang.org/cargo/reference/manifest.html
[dependencies]
symcc_runtime = { path = "../../symcc_runtime" }

View File

@ -0,0 +1,15 @@
use symcc_runtime::{
export_runtime,
filter::NoFloat,
tracing::{self, StdShMemMessageFileWriter},
Runtime,
};
export_runtime!(
NoFloat => NoFloat;
tracing::TracingRuntime::new(
StdShMemMessageFileWriter::from_stdshmem_default_env()
.expect("unable to construct tracing runtime writer. (missing env?)")
)
=> tracing::TracingRuntime
);

View File

@ -0,0 +1,52 @@
#!/bin/bash
set -eux;
SCRIPT_DIR="$( cd "$( dirname "${BASH_SOURCE[0]}" )" &> /dev/null && pwd )"
cd "$SCRIPT_DIR"
# this test intends to ...
# 1. compile symcc with the rust/tracing backend
# 2. compile a program using this symcc
# 3. run the program, capturing constraints
# 4. print the constraints in human readable form for verification
# 5. check that the captured constraints match those that we expect
# clone symcc
if [ ! -d "symcc" ]; then
echo "cloning symcc"
git clone https://github.com/AFLplusplus/symcc.git symcc
cd symcc
git checkout 45cde0269ae22aef4cca2e1fb98c3b24f7bb2984
cd ..
fi
if [ ! -d "symcc_build" ]; then
echo "building symcc"
mkdir symcc_build
cd symcc_build
cmake -G Ninja -DZ3_TRUST_SYSTEM_VERSION=on ../symcc
ninja
cd ..
fi
echo "building runtime"
cargo build -p runtime_test
echo "building dump_constraints"
cargo build -p dump_constraints
echo "building target"
SYMCC_RUNTIME_DIR=../../target/debug symcc_build/symcc symcc/test/if.c -o "if"
echo "running target with dump_constraints"
cargo run -p dump_constraints -- --plain-text --output constraints.txt -- ./if < if_test_input
echo "constraints: "
cat constraints.txt
# site_ids in the constraint trace differ between runs, so we filter them out before comparing.
sed 's/, site_id: .* / /' < constraints.txt > constraints_filtered.txt
sed 's/, site_id: .* / /' < expected_constraints.txt > expected_constraints_filtered.txt
diff constraints_filtered.txt expected_constraints_filtered.txt

View File

@ -0,0 +1,5 @@
#!/bin/bash
# installs the build dependencies for the smoke test on Ubuntu
set -eux;
apt install -y clang cmake llvm-dev ninja-build pkg-config zlib1g-dev