RobustART.metrics package¶
Submodules¶
RobustART.metrics.base_evaluator module¶
- class RobustART.metrics.base_evaluator.Evaluator¶
Bases: object
Base class for an evaluator.
- static add_subparser(name, subparsers)¶
- eval(res_file, **kwargs)¶
This should return a dict mapping metric names to metric values.
- Arguments:
res_file (str): file that holds classification results
- classmethod from_args(args)¶
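A minimal sketch of subclassing Evaluator, assuming a hypothetical result-file format of one "predicted_label true_label" pair per line; the class name, file format, and metric key below are illustrative, not part of RobustART:

```python
from RobustART.metrics.base_evaluator import Evaluator


class Top1Evaluator(Evaluator):
    """Hypothetical evaluator that reports a single top-1 accuracy metric."""

    def eval(self, res_file, **kwargs):
        correct = 0
        total = 0
        # Assumed format: each line holds "predicted_label true_label".
        with open(res_file) as f:
            for line in f:
                pred, label = line.split()
                correct += int(pred == label)
                total += 1
        # Honor the base-class contract: return a dict with keys of
        # metric names and values of metric values.
        return {'top1': correct / total if total else 0.0}
```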
RobustART.metrics.calibration_tools module¶
- RobustART.metrics.calibration_tools.aurra(confidence, correct)¶
- RobustART.metrics.calibration_tools.calib_err(confidence, correct, p='2', beta=100)¶
- RobustART.metrics.calibration_tools.fpr_and_fdr_at_recall(y_true, y_score, recall_level=0.95, pos_label=None)¶
- RobustART.metrics.calibration_tools.get_and_print_results(out_score, in_score, num_to_avg=1)¶
- RobustART.metrics.calibration_tools.get_measures(_pos, _neg, recall_level=0.95)¶
- RobustART.metrics.calibration_tools.print_measures(rms, aurra_metric, mad, sf1, method_name='Baseline')¶
- RobustART.metrics.calibration_tools.print_measures_old(auroc, aupr, fpr, method_name='Ours', recall_level=0.95)¶
- RobustART.metrics.calibration_tools.print_measures_with_std(aurocs, auprs, fprs, method_name='Ours', recall_level=0.95)¶
- RobustART.metrics.calibration_tools.show_calibration_results(confidence, correct, method_name='Baseline')¶
- RobustART.metrics.calibration_tools.soft_f1(confidence, correct)¶
- RobustART.metrics.calibration_tools.stable_cumsum(arr, rtol=1e-05, atol=1e-08)¶
Use high precision for cumsum and check that the final value matches the sum.
- Parameters
arr – array-like; to be cumulatively summed as flat
rtol – float; relative tolerance, see np.allclose
atol – float; absolute tolerance, see np.allclose
- RobustART.metrics.calibration_tools.tune_temp(logits, labels, binary_search=True, lower=0.2, upper=5.0, eps=0.0001)¶
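A hedged usage sketch of the calibration tools above, assuming tune_temp returns the scalar temperature found by binary search and that both functions accept NumPy arrays; the logits and labels here are synthetic and purely illustrative:

```python
import numpy as np

from RobustART.metrics.calibration_tools import calib_err, tune_temp

# Synthetic logits/labels for illustration only.
rng = np.random.default_rng(0)
logits = rng.normal(size=(1000, 10))
labels = rng.integers(0, 10, size=1000)

# Assumption: tune_temp returns the scalar temperature found by binary
# search within [lower, upper].
temp = tune_temp(logits, labels, binary_search=True, lower=0.2, upper=5.0)

# Temperature-scaled softmax confidences and 0/1 correctness vector.
probs = np.exp(logits / temp)
probs /= probs.sum(axis=1, keepdims=True)
confidence = probs.max(axis=1)
correct = (probs.argmax(axis=1) == labels).astype(float)

# Assumption: calib_err computes the l_p calibration error from
# per-sample confidences and correctness, binned by beta samples.
rms_error = calib_err(confidence, correct, p='2', beta=100)
```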
RobustART.metrics.imageneta_evaluator module¶
- class RobustART.metrics.imageneta_evaluator.ImageNetAEvaluator¶
Bases: RobustART.metrics.base_evaluator.Evaluator
A class for evaluating ImageNet-A.
- static add_subparser(name, subparsers)¶
- clear()¶
Clear the stored results. Call this every time you switch to another model while reusing the same evaluator instance.
- eval(res_file, perturbation=None)¶
- Parameters
res_file – file that stores the classification results
perturbation –
- Returns
the dict of ImageNet-A results
- classmethod from_args(args)¶
- get_mean()¶
- Returns
The mean value of the metric
- load_res(res_file)¶
Load results from file.
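A usage sketch, assuming the evaluator is constructed without arguments; the result-file path below is a placeholder, and its format is whatever the RobustART evaluation pipeline writes:

```python
from RobustART.metrics.imageneta_evaluator import ImageNetAEvaluator

evaluator = ImageNetAEvaluator()
# 'model_a_results.json' is a placeholder path for a stored result file.
metrics = evaluator.eval('model_a_results.json')  # dict of ImageNet-A results
print(metrics)
print(evaluator.get_mean())

# Clear the stored results before evaluating a different model with the
# same evaluator instance.
evaluator.clear()
```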
RobustART.metrics.imagenetc_evaluator module¶
- class RobustART.metrics.imagenetc_evaluator.ClsMetric(metric_dict={})¶
Bases: RobustART.metrics.base_evaluator.Metric
Metric for ImageNet-C.
- set_cmp_key(key)¶
- class RobustART.metrics.imagenetc_evaluator.ImageNetCEvaluator(topk=[1, 5])¶
Bases: RobustART.metrics.base_evaluator.Evaluator
Evaluator for ImageNet-C.
- static add_subparser(name, subparsers)¶
- eval(res_file)¶
- Parameters
res_file – file that stores the classification results
- Returns
the ImageNet-C metrics for one model
- classmethod from_args(args)¶
- load_res(res_file)¶
Load results from file.
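A usage sketch with the default top-1/top-5 setting; the result-file path is a placeholder:

```python
from RobustART.metrics.imagenetc_evaluator import ImageNetCEvaluator

evaluator = ImageNetCEvaluator(topk=[1, 5])
# Placeholder path; the file is expected to store one model's results
# across the ImageNet-C corruptions.
metrics = evaluator.eval('model_a_imagenetc_results.json')
print(metrics)  # ImageNet-C metrics for one model
```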
RobustART.metrics.imageneto_evaluator module¶
- class RobustART.metrics.imageneto_evaluator.ImageNetOEvaluator¶
Bases: RobustART.metrics.base_evaluator.Evaluator
- static add_subparser(name, subparsers)¶
- clear()¶
- eval(res_file_in=None, res_file_out=None)¶
This should return a dict mapping metric names to metric values.
- Arguments:
res_file (str): file that holds classification results
- classmethod from_args(args)¶
- get_mean()¶
- load_res(res_file)¶
Load results from file.
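A usage sketch; since ImageNet-O is an out-of-distribution test set, res_file_in and res_file_out presumably hold results on in-distribution data and on ImageNet-O samples respectively. Both paths below are placeholders:

```python
from RobustART.metrics.imageneto_evaluator import ImageNetOEvaluator

evaluator = ImageNetOEvaluator()
# Placeholder paths; the exact file format is defined by the pipeline.
metrics = evaluator.eval(res_file_in='in_dist_results.json',
                         res_file_out='imageneto_results.json')
print(metrics)
print(evaluator.get_mean())
evaluator.clear()  # reset before evaluating another model
```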
RobustART.metrics.imagenetp_evaluator module¶
- class RobustART.metrics.imagenetp_evaluator.ImageNetPEvaluator¶
Bases: RobustART.metrics.base_evaluator.Evaluator
- static add_subparser(name, subparsers)¶
- clear()¶
- eval(res_file, perturbation=None)¶
This should return a dict mapping metric names to metric values.
- Arguments:
res_file (str): file that holds classification results
- classmethod from_args(args)¶
- get_mean()¶
- load_res(res_file)¶
Load results from file.
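A usage sketch; the result-file path and perturbation name are placeholders, assuming ImageNet-P results are stored per perturbation sequence:

```python
from RobustART.metrics.imagenetp_evaluator import ImageNetPEvaluator

evaluator = ImageNetPEvaluator()
# Placeholder path and perturbation name.
metrics = evaluator.eval('imagenetp_results.json',
                         perturbation='gaussian_noise')
print(metrics)
print(evaluator.get_mean())
evaluator.clear()  # reset before evaluating another model
```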
RobustART.metrics.imagenets_evaluator module¶
- class RobustART.metrics.imagenets_evaluator.ImageNetSEvaluator¶
Bases: RobustART.metrics.base_evaluator.Evaluator
- static add_subparser(name, subparsers)¶
- clear()¶
- eval(res_file, decoder_type='pil', resize_type='pil-bilinear')¶
This should return a dict mapping metric names to metric values.
- Arguments:
res_file (str): file that holds classification results
- classmethod from_args(args)¶
- get_mean()¶
- get_std()¶
- load_res(res_file)¶
Load results from file.
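A usage sketch with the documented defaults; the result-file path is a placeholder, and decoder_type/resize_type presumably identify the decoding and resizing pipeline the results were generated with:

```python
from RobustART.metrics.imagenets_evaluator import ImageNetSEvaluator

evaluator = ImageNetSEvaluator()
# Placeholder path; decoder_type and resize_type use the documented
# default values.
metrics = evaluator.eval('imagenets_results.json',
                         decoder_type='pil', resize_type='pil-bilinear')
print(metrics)
print(evaluator.get_mean(), evaluator.get_std())
evaluator.clear()  # reset before evaluating another model
```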