Commit a2264c2 (1 parent: 150acc3)
Updated docs with thumbnails

8 files changed: 23 additions & 20 deletions

CHANGES.txt

Lines changed: 1 addition & 0 deletions
@@ -102,3 +102,4 @@ v<1.0.1>, <08/10/2025> -- Updated tests with random state and better test scores
 v<1.0.2>, <10/10/2025> -- Removed Python 3.9 from testing
 v<1.0.2>, <10/10/2025> -- Python 3.13 support
 v<1.0.2>, <11/30/2025> -- Fixed COMB max_contam empty list issue
+v<1.0.3>, <01/19/2026> -- Updated docs with thumbnails

docs/benchmark.rst

Lines changed: 9 additions & 9 deletions
@@ -97,7 +97,7 @@ uncertainty about its mean and is the most robust (best least accurate
 prediction). However, for interpretability and general performance the
 ``MIXMOD, FILTER,`` and ``META`` thresholders are good fits.

-.. figure:: figs/Benchmark1.png
+.. thumbnail:: figs/Benchmark1.png
    :alt: Benchmark defaults

 ----
@@ -117,7 +117,7 @@ dataset with fewer examples and a greater bias.
    :file: tables/Benchmark2.csv
    :class: sphinx-datatable

-.. figure:: figs/Benchmark2.png
+.. thumbnail:: figs/Benchmark2.png
    :alt: Benchmark all

 ----
@@ -131,7 +131,7 @@ similar setup is followed as the first benchmark test, however, the
 labels were set using the true contamination applied to the decomposed
 scores as the right-hand component of the MCC deterioration equation.

-.. figure:: figs/Multi1.png
+.. thumbnail:: figs/Multi1.png
    :alt: Benchmark multiple

 However, to effectively compare whether the multiple outlier detection
@@ -147,7 +147,7 @@ From this, it can be shown that by using a multiple outlier likelihood
 score set it generally performs better than using a single outlier
 likelihood scores set.

-.. figure:: figs/Multi2.png
+.. thumbnail:: figs/Multi2.png
    :alt: Benchmark multiple comparison

 ----
@@ -201,10 +201,10 @@ methods produced results that were comparable to their inputs.
 | COMB5         | COMB(method='stacked')                |
 +---------------+---------------------------------------+

-.. figure:: figs/Comb1.png
+.. thumbnail:: figs/Comb1.png
    :alt: Combination Performance

-.. figure:: figs/Comb2.png
+.. thumbnail:: figs/Comb2.png
    :alt: Combination Close Up

 ----
@@ -232,7 +232,7 @@ potential to over predict will vary significantly based on the selected
 dataset and outlier detection method, and therefore it is important to
 check the predicted contamination level after thresholding.

-.. figure:: figs/Overpred.png
+.. thumbnail:: figs/Overpred.png
    :alt: Over prediction

 A second over predictive evaluation can also be done, but now with
@@ -243,7 +243,7 @@ even beyond the best contamination level. However, now some clear well
 performing thresholders can be matched to the previous benchmarking,
 notably ``META`` and ``FILTER``.

-.. figure:: figs/Overpred_best.png
+.. thumbnail:: figs/Overpred_best.png
    :alt: Over prediction best

 ----
@@ -272,7 +272,7 @@ setting different random states (e.g. ``COMB(thresholders =
 DSN(random_state=111222)])``). This should provide a more robust and
 reliable result.

-.. figure:: figs/Randomness.png
+.. thumbnail:: figs/Randomness.png
    :alt: Effects of Randomness

 ----
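Every docs page touched by this commit applies the same mechanical change: Sphinx's built-in ``figure`` directive is swapped for the ``thumbnail`` directive from the sphinxcontrib-images extension, which (by default) renders a smaller inline image that opens full-size on click. A representative before/after sketch (any options beyond ``:alt:`` are omitted here):

```rst
.. Before: static image, always rendered inline at full size
.. figure:: figs/Benchmark1.png
   :alt: Benchmark defaults

.. After: click-to-enlarge thumbnail; requires sphinxcontrib-images to
.. be installed and 'sphinxcontrib.images' listed in conf.py extensions
.. thumbnail:: figs/Benchmark1.png
   :alt: Benchmark defaults
```

The ``:alt:`` option carries over unchanged, so only the directive name needs editing on each page.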

docs/conf.py

Lines changed: 1 addition & 0 deletions
@@ -64,6 +64,7 @@
     'sphinxcontrib.bibtex',
     'sphinx.ext.napoleon',
     'sphinx_rtd_theme',
+    'sphinxcontrib.images',
     'sphinxcontrib.jquery',
     'sphinx_datatables'
 ]
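For the build to recognize the new directive, the extension module has to be registered in ``conf.py``. A minimal sketch of the resulting ``extensions`` list after this commit (only this field is shown; the rest of conf.py is unchanged):

```python
# Relevant conf.py fragment: the sphinxcontrib-images package registers
# the ``thumbnail`` directive, so its module name joins the existing
# Sphinx extensions.
extensions = [
    'sphinxcontrib.bibtex',
    'sphinx.ext.napoleon',
    'sphinx_rtd_theme',
    'sphinxcontrib.images',  # new in this commit: thumbnail directive
    'sphinxcontrib.jquery',
    'sphinx_datatables',
]
```

Without this entry, every ``.. thumbnail::`` directive in the docs would fail with an "Unknown directive type" error at build time.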

docs/confidence.rst

Lines changed: 2 additions & 2 deletions
@@ -187,12 +187,12 @@ Below are two scatter plots of the results from the example code above.
 However, in the second plot the use of a classification type thresholder
 ``CLF`` has been employed.

-.. figure:: figs/Conf1.png
+.. thumbnail:: figs/Conf1.png
    :alt: Scatter plot of the above example 1

 Figure 1: Scatter plot of the ``CONF`` evaluated results using ``IQR``.

-.. figure:: figs/Conf2.png
+.. thumbnail:: figs/Conf2.png
    :alt: Scatter plot of the above example 2

 Figure 2: Scatter plot of the ``CONF`` evaluated results using ``CLF``.

docs/example.rst

Lines changed: 1 addition & 1 deletion
@@ -118,7 +118,7 @@ and threshold the outlier detection scores.
        save_figure=False,
    )

-.. figure:: figs/KNN_KARCH.png
+.. thumbnail:: figs/KNN_KARCH.png
    :alt: karch demo

 ----

docs/index.rst

Lines changed: 1 addition & 1 deletion
@@ -220,7 +220,7 @@ Unsupervised Anomaly Detection <https://arxiv.org/abs/2210.10487>`_

 **The comparison among the implemented models** is made available below:

-.. figure:: figs/All.png
+.. thumbnail:: figs/All.png
    :alt: Comparison of selected models

 ############################

docs/ranking.rst

Lines changed: 7 additions & 7 deletions
@@ -196,22 +196,22 @@ In order to get a better understanding on how these six proxy-metrics
 performed overall, joyplots below demonstrate the distributions of their
 Pearson's correlation with respect to the MCC scores for each statistic.

-.. figure:: figs/Rank1.png
+.. thumbnail:: figs/Rank1.png
    :alt: Total Average Pearson's score

 Figure 1: Total average Pearson's score of selected proxy metrics.

-.. figure:: figs/Rank2.png
+.. thumbnail:: figs/Rank2.png
    :alt: Mean Pearson's score Across Datasets

 Figure 2: Mean Pearson's score across datasets for selected proxy metrics.

-.. figure:: figs/Rank3.png
+.. thumbnail:: figs/Rank3.png
    :alt: Median Pearson's score Across Datasets

 Figure 3: Median Pearson's score across datasets for selected proxy metrics.

-.. figure:: figs/Rank4.png
+.. thumbnail:: figs/Rank4.png
    :alt: Standard Deviation Pearson's score Across Datasets

 Figure 4: Standard deviation Pearson's score across datasets for selected proxy metrics.
@@ -253,12 +253,12 @@ RankDCG results to.
 The joyplots below indicate the performance between the methods with
 respect to aggregation across each dataset.

-.. figure:: figs/Rank5.png
+.. thumbnail:: figs/Rank5.png
    :alt: RankDCG scores for each test

 Figure 5: RankDCG scores for each tested combination.

-.. figure:: figs/Rank6.png
+.. thumbnail:: figs/Rank6.png
    :alt: Mean RankDCG scores per dataset

 Figure 6: Mean RankDCG scores for each tested combination aggregated per dataset.
@@ -270,7 +270,7 @@ not overfitted and is able to generalize, the datasets ``mammography,
 skin``, and ``smtp`` were tested as the model had not been trained on
 them.

-.. figure:: figs/Rank7.png
+.. thumbnail:: figs/Rank7.png
    :alt: Mean RankDCG scores per test dataset

 Figure 7: Mean RankDCG scores for each tested combination aggregated per test dataset.

docs/requirements.txt

Lines changed: 1 addition & 0 deletions
@@ -12,6 +12,7 @@ scipy
 sphinx-datatables
 sphinx-rtd-theme
 sphinxcontrib-bibtex
+sphinxcontrib-images
 torch
 tqdm
 xgboost
