Add doc for example baby_fuzzer_* (#564)

* Add doc for example `baby_fuzzer_*`
* Fix `mdbook build`

Co-authored-by: syheliel <syheliel>

parent 8eab7d6063
commit 6b95361123
@@ -9,8 +9,8 @@
- [Build](./getting_started/build.md)
- [Crates](./getting_started/crates.md)

- [Baby Fuzzer](./baby_fuzzer.md)
- [Baby Fuzzer](./baby_fuzzer/baby_fuzzer.md)
- [More Examples](./baby_fuzzer/more_examples.md)
- [Core Concepts](./core_concepts/core_concepts.md)
- [Observer](./core_concepts/observer.md)
- [Executor](./core_concepts/executor.md)
@@ -11,7 +11,7 @@ You can find a complete version of this tutorial as an example fuzzer in [`fuzze
> ### Warning
>
> This example fuzzer is too naive for any real-world usage.
> Its purpose is solely to show the main components of the library, for a more in-depth walkthrough on building a custom fuzzer go to the [Tutorial chapter](./tutorial/intro.md) directly.
> Its purpose is solely to show the main components of the library, for a more in-depth walkthrough on building a custom fuzzer go to the [Tutorial chapter](../tutorial/intro.md) directly.

## Creating a project
docs/src/baby_fuzzer/more_examples.md (new file, 11 lines)
@@ -0,0 +1,11 @@
# More Examples

Examples can be found under `./fuzzers`.

| fuzzer name | usage |
| ---- | ---- |
| baby_fuzzer_gramatron | [Gramatron](https://github.com/HexHive/Gramatron) is a fuzzer that uses **grammar automatons** in conjunction with aggressive mutation operators to synthesize complex bug triggers |
| baby_fuzzer_grimoire | [Grimoire](https://www.usenix.org/system/files/sec19-blazytko.pdf) is a fully automated coverage-guided fuzzer which works **without any form of human interaction or pre-configuration** |
| baby_fuzzer_nautilus | [Nautilus](https://www.ndss-symposium.org/wp-content/uploads/2019/02/ndss2019_04A-3_Aschermann_paper.pdf) is a **coverage-guided, grammar-based** fuzzer |
| baby_fuzzer_tokens | a basic **token-level** fuzzer with token-level mutations |
| baby_fuzzer_with_forkexecutor | an example for **InProcessForkExecutor** |
| baby_no_std | a minimalistic example of how to create a libafl-based fuzzer that works in **`no_std`** environments such as TEEs, kernels, or bare metal |
@@ -1,8 +1,15 @@
# Baby fuzzer
# Baby Gramatron

This is a minimalistic example of how to create a libafl-based fuzzer.
This fuzzer shows how to implement grammar-aware fuzzing. [Gramatron](https://github.com/HexHive/Gramatron) uses grammar automatons in conjunction with aggressive mutation operators to synthesize complex bug triggers. `auto.json` stores the grammar automaton for PHP; it corresponds to `libafl::generators::Automaton` and is serialized into `auto.postcard`. The generators in `libafl::generators::gramatron` produce valid grammar sequences from this `Automaton` and pass them to the `harness`, whose only job is to print the original input.
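To make that pipeline concrete, here is a hedged Rust sketch of loading the serialized automaton and handing it to a generator. It is not the example's literal code; the constructor signatures and the `generate_initial_inputs_forced` helper are assumptions that may differ between LibAFL versions.

```
use libafl::generators::Automaton;

// Read the postcard-serialized automaton shipped as `auto.postcard` and decode it
// back into the `Automaton` that the Gramatron generator walks.
fn load_automaton() -> Automaton {
    let bytes = std::fs::read("auto.postcard").expect("auto.postcard not found");
    postcard::from_bytes(&bytes).expect("failed to deserialize Automaton")
}

// In the fuzzer setup (state/fuzzer/executor/manager elided), the automaton then
// feeds a generator, roughly:
//   let automaton = load_automaton();
//   let mut generator = libafl::generators::GramatronGenerator::new(&automaton);
//   state.generate_initial_inputs_forced(&mut fuzzer, &mut executor, &mut generator, &mut mgr, 8)?;
```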
It runs on a single core until a crash occurs and then exits.

The tested program is a simple Rust function without any instrumentation.
For real fuzzing, you will want to add some sort of coverage or other feedback.
When you run `cargo run`, you may see output like the following:

```
b=mlhs_node.isz(c,c, )
d=false.keyword__FILE__(c,b,a,b)
a=select.Jan(d)
a=first.literal( )
b=[0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,0,nil].DomainError(c)
next a
b=Oo.gsub(a,d,b)
d=0.hex( )
```
@@ -1,8 +1,7 @@
# Baby fuzzer
# Baby Grimoire fuzzer

This fuzzer shows how to implement the [Grimoire fuzzer](https://www.usenix.org/system/files/sec19-blazytko.pdf), a fully automated coverage-guided fuzzer which works without any form of human interaction or pre-configuration. `libafl::mutators::grimoire` provides four mutators: `GrimoireExtensionMutator`, `GrimoireRecursiveReplacementMutator`, `GrimoireStringReplacementMutator`, and `GrimoireRandomDeleteMutator`.
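A hedged sketch of how these four mutators might be bundled into one scheduled mutator inside the fuzzer's setup; the `with_max_stack_pow` helper and the parameterless constructors follow the usual LibAFL API but are assumptions that can differ across versions.

```
use libafl::bolts::tuples::tuple_list;
use libafl::mutators::{
    grimoire::{
        GrimoireExtensionMutator, GrimoireRandomDeleteMutator,
        GrimoireRecursiveReplacementMutator, GrimoireStringReplacementMutator,
    },
    StdScheduledMutator,
};

// Bundle the four Grimoire mutators into one scheduled mutator; every fuzzing
// iteration then picks and stacks a few of them (here at most 2^3 applications).
let grimoire_mutator = StdScheduledMutator::with_max_stack_pow(
    tuple_list!(
        GrimoireExtensionMutator::new(),
        GrimoireRecursiveReplacementMutator::new(),
        GrimoireStringReplacementMutator::new(),
        GrimoireRandomDeleteMutator::new(),
    ),
    3,
);
```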
This is a minimalistic example of how to create a libafl-based fuzzer.

It runs on a single core until a crash occurs and then exits.

The tested program is a simple Rust function without any instrumentation.
For real fuzzing, you will want to add some sort of coverage or other feedback.
The fuzzer loads all files in `./corpus` as initial inputs. Inputs are then mutated by `mutator` (built from `havoc_mutations`) and by `grimoire_mutator`. The `harness` first checks whether the `input` contains the substring `fn` or `pippopippo` and then prints the input as mutated by `grimoire_mutator`.

> **_NOTE:_** This harness is not designed to crash, so `cargo run` will not terminate.
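As a plain-Rust illustration of that check (a hypothetical stand-alone function; the real example wires equivalent logic into a LibAFL harness closure):

```
// Hypothetical sketch of the harness logic: look for the magic substrings and
// echo the mutated input. It never crashes, which is why the fuzzer keeps running.
fn harness_logic(data: &[u8]) {
    if let Ok(text) = std::str::from_utf8(data) {
        if text.contains("fn") || text.contains("pippopippo") {
            println!("{text}");
        }
    }
}
```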
@@ -1,8 +1,9 @@
# Baby fuzzer

This is a minimalistic example of how to create a libafl-based fuzzer.

It runs on a single core until a crash occurs and then exits.

The tested program is a simple Rust function without any instrumentation.
For real fuzzing, you will want to add some sort of coverage or other feedback.
# Baby Nautilus fuzzer

[Nautilus](https://www.ndss-symposium.org/ndss-paper/nautilus-fishing-for-deep-bugs-with-grammars/) is a coverage-guided, grammar-based fuzzer. It reads mruby's context-free grammar from `grammar.json` and then uses the corresponding feedback, generator, and mutators to fuzz.
`libafl::mutators::nautilus` and the related `nautilus` modules contain:

```
NautilusInput, NautilusContext
NautilusChunksMetadata, NautilusFeedback
NautilusGenerator
NautilusRandomMutator, NautilusRecursionMutator, NautilusSpliceMutator
```
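Below is a hedged Rust sketch of how these pieces typically fit together; the module paths, the `from_file` constructor arguments, and the required `nautilus` cargo feature are assumptions that may vary by LibAFL version.

```
use libafl::bolts::tuples::tuple_list;
use libafl::generators::nautilus::{NautilusContext, NautilusGenerator};
use libafl::mutators::nautilus::{
    NautilusRandomMutator, NautilusRecursionMutator, NautilusSpliceMutator,
};

// Parse the context-free grammar shipped as `grammar.json`; the first argument
// bounds the depth of the derivation trees that will be generated.
let context = NautilusContext::from_file(15, "grammar.json");

// Generator that derives fresh grammar trees (NautilusInputs) from the context.
let mut generator = NautilusGenerator::new(&context);

// Grammar-aware mutators; all of them borrow the same grammar context.
let mutations = tuple_list!(
    NautilusRandomMutator::new(&context),
    NautilusRecursionMutator::new(&context),
    NautilusSpliceMutator::new(&context),
);
```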
@@ -1,8 +1,17 @@
# Baby fuzzer

This is a minimalistic example of how to create a libafl-based fuzzer.

It runs on a single core until a crash occurs and then exits.

The tested program is a simple Rust function without any instrumentation.
For real fuzzing, you will want to add some sort of coverage or other feedback.
# Baby tokens fuzzer

1. `tokenizer` is used to split inputs into tokens.
2. `encoder_decoder` gives every new token a fresh id and records the mapping, so it can convert tokens to an `EncodedInput` and back again.
3. `encoded_mutations` handles the token-level mutations; its definition follows, and a short usage sketch comes after it:

```
pub fn encoded_mutations() -> tuple_list_type!(
    EncodedRandMutator,
    EncodedIncMutator,
    EncodedDecMutator,
    EncodedAddMutator,
    EncodedDeleteMutator,
    EncodedInsertCopyMutator,
    EncodedCopyMutator,
    EncodedCrossoverInsertMutator,
    EncodedCrossoverReplaceMutator,
)
```
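A hedged sketch of plugging this mutation tuple into a mutational stage; the scheduled-mutator and stage types are the usual LibAFL ones, but the exact module paths are assumptions.

```
// Module paths may differ slightly across LibAFL versions.
use libafl::bolts::tuples::tuple_list;
use libafl::mutators::{encoded_mutations::encoded_mutations, StdScheduledMutator};
use libafl::stages::StdMutationalStage;

// Wrap the token-level mutations in a scheduled mutator and run it as the
// fuzzer's single mutational stage over the EncodedInput corpus.
let mutator = StdScheduledMutator::new(encoded_mutations());
let mut stages = tuple_list!(StdMutationalStage::new(mutator));
```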
@@ -1,8 +1,2 @@
# Baby fuzzer

This is a minimalistic example of how to create a libafl-based fuzzer.

It runs on a single core until a crash occurs and then exits.

The tested program is a simple Rust function without any instrumentation.
For real fuzzing, you will want to add some sort of coverage or other feedback.
# Baby fuzzer with forkexecutor

Example for `InProcessForkExecutor`. Compared with `InProcessExecutor`, it needs an additional `shmem_provider` parameter so that observer data recorded in the forked child remains visible to the parent process.
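A hedged sketch of the wiring difference; the variables `harness`, `observer`, `fuzzer`, `state`, and `mgr` stand for the usual setup and are assumptions here, and the exact `new` signature (for example an extra timeout argument) varies between LibAFL versions.

```
use libafl::bolts::shmem::{ShMemProvider, StdShMemProvider};
use libafl::bolts::tuples::tuple_list;
use libafl::executors::InProcessForkExecutor;

// Shared memory lets observer data recorded in the forked child reach the parent.
let shmem_provider = StdShMemProvider::new()?;

// Same arguments as InProcessExecutor::new, plus the shmem provider at the end
// (fork-based executors only work on Unix-like systems).
let mut executor = InProcessForkExecutor::new(
    &mut harness,
    tuple_list!(observer),
    &mut fuzzer,
    &mut state,
    &mut mgr,
    shmem_provider,
)?;
```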