To solve this, you can add models depending on correlations between them. But in …
n2ulayer.py and mu.py define this special kind of neural network. loss.py defines the correlation measure we want to minimize, implemented for use in TensorFlow.
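For illustration, a correlation loss of this kind could look like the following minimal sketch (the actual definition lives in loss.py; the function name and the exact normalization here are assumptions, not the real code):

    import tensorflow as tf

    def correlation_loss(a, b):
        # Pearson correlation between two 1D prediction tensors.
        # Minimizing (the square of) this value pushes the two sets
        # of predictions towards being uncorrelated.
        a_c = a - tf.reduce_mean(a)
        b_c = b - tf.reduce_mean(b)
        cov = tf.reduce_mean(a_c * b_c)
        std = tf.math.reduce_std(a) * tf.math.reduce_std(b) + 1e-8
        return cov / std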
onemodel.py generates a quick (and quite random) anomaly detection model for use on the data defined in data.py (just a 2D Gaussian). 20 models are generated and their predictions (sorted from most normal (green) to most anomalous (red)) are drawn in the numbered images in imgs.
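As a rough stand-in for what data.py and onemodel.py provide (the names and details here are purely illustrative, not the actual code):

    import numpy as np

    def make_data(n=1000, seed=0):
        # data.py: points drawn from a 2D Gaussian
        return np.random.default_rng(seed).normal(size=(n, 2))

    def random_model(seed):
        # onemodel.py: a quick, quite random anomaly scorer; here
        # simply the distance to a randomly shifted center, where a
        # higher score means "more anomalous"
        center = np.random.default_rng(seed).normal(size=2)
        return lambda x: np.linalg.norm(x - center, axis=1)

    x = make_data()
    scores = [random_model(s)(x) for s in range(20)]  # the 20 models' predictions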
If you use all 20 models and simply average them, this results in imgs/recombine.png. Notice how the green points are much more centered. (This image is created by recombine.py.)
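Continuing the sketch above, the plain averaging step amounts to something like this (recombine.py may of course do it differently):

    import numpy as np

    # scores: the list of 20 prediction arrays from the sketch above
    avg = np.mean(np.stack(scores), axis=0)
    order = np.argsort(avg)  # most normal (green) first, most anomalous (red) last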
choosenext.py creates and uses the TensorFlow model to find a list of predictions that is least correlated to a given list of predictions.
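The real search goes through the network defined in n2ulayer.py/mu.py; as a rough, purely illustrative stand-in, one could gradient-descend a free prediction vector against the correlation loss sketched earlier (all names here are assumptions):

    import tensorflow as tf

    def find_uncorrelated(reference, n_steps=500, lr=0.1):
        # optimize a free prediction vector so its (squared) correlation
        # with the given predictions goes towards zero
        ref = tf.constant(reference, dtype=tf.float32)
        p = tf.Variable(tf.random.normal(ref.shape))
        opt = tf.keras.optimizers.Adam(lr)
        for _ in range(n_steps):
            with tf.GradientTape() as tape:
                loss = tf.square(correlation_loss(p, ref))  # from the sketch above
            opt.apply_gradients([(tape.gradient(loss, p), p)])
        return p.numpy()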
main.py uses this to combine a random model (before.png) with a combination of 4 models (suggestion.png) into updated.png. Notice how the area is covered much better in updated.png than in before.png.
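combine(a,b) itself lives in main.py; a naive version (an assumption for illustration, not necessarily what main.py does) could rank-normalize both score vectors so their scales match and then average them:

    import numpy as np

    def rank(x):
        # rank-normalize to [0, 1] so both score scales are comparable
        return np.argsort(np.argsort(x)).astype(float) / (len(x) - 1)

    def combine(a, b):
        return (rank(a) + rank(b)) / 2.0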
Your task would be to extend this method to be able to combine arbitrarily many models (use remainder in main.py, find a better combination function than combine(a,b) in main.py, and introduce an exit condition) and to test whether this method results in more stable/powerful ensembles.
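One possible skeleton for such an extension (the names, the greedy strategy, and the threshold-based exit condition are all placeholders, not part of the existing code):

    import numpy as np

    def build_ensemble(predictions, max_correlation=0.1):
        # predictions: score arrays sorted from least to most correlated
        # with the current ensemble (e.g. the remainder in main.py)
        ensemble = predictions[0]
        for p in predictions[1:]:
            if abs(np.corrcoef(ensemble, p)[0, 1]) > max_correlation:
                break  # exit condition: remaining models are too correlated
            ensemble = combine(ensemble, p)  # combine() as sketched above
        return ensemble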
If you have any questions, please feel free to write an email to Simon.Kluettermann@cs.tu-dortmund.de