comparator Package
comparator_utils Module
Code forked from: https://github.com/ocelma/python-recsys/
-
class pyradar.comparator.comparator_utils.Evaluation(data=None)[source]
Bases: object
Base class for Evaluation
It has the basic methods to load ground truth and test data.
Any other Evaluation class derives from this base class.
Parameters: data (list) – A list of tuples, each containing a real and a predicted value.
E.g.: [(3, 2.3), (1, 0.9), (5, 4.9), (2, 0.9), (3, 1.5)]
-
add(rating, rating_pred)[source]
Adds a (real rating, predicted rating) pair
Parameters:
- rating – a real rating value (the ground truth)
- rating_pred – the predicted rating
-
add_test(rating_pred)[source]
Adds a predicted rating to the current test list
Parameters: rating_pred – the predicted rating
-
compute()[source]
Computes the evaluation using the loaded ground truth and test lists
-
get_ground_truth()[source]
Returns: the ground truth list
-
get_test()[source]
Returns: the test dataset (a list)
-
load(ground_truth, test)[source]
Loads both the ground truth and the test lists. The two lists must have
the same length.
Parameters:
- ground_truth (list) – a list of real values (aka ground truth).
  E.g.: [3.0, 1.0, 5.0, 2.0, 3.0]
- test (list) – a list of predicted values. E.g.: [2.3, 0.9, 4.9, 0.9, 1.5]
-
load_ground_truth(ground_truth)[source]
Loads a ground truth dataset
Parameters: ground_truth (list) – a list of real values (aka ground truth).
E.g.: [3.0, 1.0, 5.0, 2.0, 3.0]
-
load_test(test)[source]
Loads a test dataset
Parameters: test (list) – a list of predicted values. E.g.: [2.3, 0.9, 4.9, 0.9, 1.5]
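To make the base-class API above concrete, here is a minimal, standalone sketch of an Evaluation-like class with the documented load/add/get methods. This is an illustrative reimplementation, not pyradar's actual code; internal attribute names are assumptions.

```python
class Evaluation:
    """Sketch of the Evaluation base-class API (illustrative, not pyradar's code)."""

    def __init__(self, data=None):
        self._ground_truth = []
        self._test = []
        if data is not None:
            # data is a list of (real, predicted) tuples
            for real, predicted in data:
                self.add(real, predicted)

    def add(self, rating, rating_pred):
        # Append one (ground truth, prediction) pair
        self._ground_truth.append(rating)
        self._test.append(rating_pred)

    def add_test(self, rating_pred):
        # Append a predicted rating only
        self._test.append(rating_pred)

    def load(self, ground_truth, test):
        # Both lists must have the same length
        if len(ground_truth) != len(test):
            raise ValueError("lists must have the same length")
        self._ground_truth = list(ground_truth)
        self._test = list(test)

    def load_ground_truth(self, ground_truth):
        self._ground_truth = list(ground_truth)

    def load_test(self, test):
        self._test = list(test)

    def get_ground_truth(self):
        return self._ground_truth

    def get_test(self):
        return self._test


ev = Evaluation([(3, 2.3), (1, 0.9), (5, 4.9)])
print(ev.get_ground_truth())  # [3, 1, 5]
print(ev.get_test())          # [2.3, 0.9, 4.9]
```

Subclasses such as MAE, Pearson and RMSE would override compute() to operate on these two lists.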
-
class pyradar.comparator.comparator_utils.MAE(data=None)[source]
Bases: pyradar.comparator.comparator_utils.Evaluation
Mean Absolute Error
Parameters: data (tuple of two lists) – the ground truth data and the test data
-
compute(r=None, r_pred=None)[source]
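The mean absolute error is the average of the absolute differences between each real and predicted value. A self-contained sketch (the function name mae is ours, not pyradar's API):

```python
def mae(ground_truth, test):
    """Mean absolute error between paired lists (illustrative helper)."""
    if len(ground_truth) != len(test):
        raise ValueError("lists must have the same length")
    # Average of |real - predicted| over all pairs
    return sum(abs(r - p) for r, p in zip(ground_truth, test)) / len(ground_truth)


print(mae([3.0, 1.0, 5.0, 2.0, 3.0], [2.3, 0.9, 4.9, 0.9, 1.5]))  # ~0.7
```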
-
class pyradar.comparator.comparator_utils.Pearson(data=None)[source]
Bases: pyradar.comparator.comparator_utils.Evaluation
Pearson correlation
Parameters: data (tuple of two lists) – the ground truth data and the test data
-
compute()[source]
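Pearson correlation measures the linear relationship between the two lists: the covariance of the series divided by the product of their standard deviations. A sketch using the example data from the base class (pearson is an illustrative helper, not pyradar's API):

```python
import math


def pearson(x, y):
    """Pearson correlation coefficient between two equal-length lists."""
    n = len(x)
    mx = sum(x) / n
    my = sum(y) / n
    # Covariance (unnormalized) and the two standard-deviation terms
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = math.sqrt(sum((a - mx) ** 2 for a in x))
    sy = math.sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)


print(round(pearson([3.0, 1.0, 5.0, 2.0, 3.0], [2.3, 0.9, 4.9, 0.9, 1.5]), 3))
```

A value near 1.0 means the predictions rise and fall with the ground truth.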
-
class pyradar.comparator.comparator_utils.RMSE(data=None)[source]
Bases: pyradar.comparator.comparator_utils.Evaluation
Root Mean Square Error
Parameters: data (tuple of two lists) – the ground truth data and the test data
-
compute(r=None, r_pred=None)[source]
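RMSE is the square root of the mean squared difference between real and predicted values; compared with MAE it penalizes large errors more heavily. A sketch (rmse is an illustrative helper, not pyradar's API):

```python
import math


def rmse(ground_truth, test):
    """Root mean square error between paired lists (illustrative helper)."""
    if len(ground_truth) != len(test):
        raise ValueError("lists must have the same length")
    n = len(ground_truth)
    # sqrt of the mean of squared errors
    return math.sqrt(sum((r - p) ** 2 for r, p in zip(ground_truth, test)) / n)


print(round(rmse([3.0, 1.0, 5.0, 2.0, 3.0], [2.3, 0.9, 4.9, 0.9, 1.5]), 3))
```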
image_comparator Module
-
class pyradar.comparator.image_comparator.BaseImageComparator(image_1, image_2)[source]
Bases: object
-
validate_images_are_comparable(image1, image2)[source]
-
exception pyradar.comparator.image_comparator.ComparatorException(value)[source]
Bases: exceptions.Exception
-
class pyradar.comparator.image_comparator.ImageComparator(image_1, image_2)[source]
Bases: pyradar.comparator.image_comparator.BaseImageComparator
-
calculate_mae()[source]
-
calculate_pearson()[source]
-
calculate_rmse1()[source]
One way to compute RMSE.
-
calculate_rmse2()[source]
Another way to compute RMSE.
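The two methods above suggest two formulations of the same quantity. As an illustration (pyradar's internals may differ), here are two numerically equivalent ways to compute RMSE over image arrays: an explicit elementwise loop and a vectorized NumPy expression.

```python
import numpy as np


def rmse_loop(a, b):
    """RMSE via an explicit loop over pixels (illustrative)."""
    total = 0.0
    for x, y in zip(a.ravel(), b.ravel()):
        total += (float(x) - float(y)) ** 2
    return (total / a.size) ** 0.5


def rmse_vectorized(a, b):
    """RMSE via vectorized NumPy operations (illustrative)."""
    diff = a.astype(float) - b.astype(float)
    return float(np.sqrt(np.mean(diff ** 2)))


img1 = np.array([[10, 20], [30, 40]])
img2 = np.array([[12, 18], [33, 39]])
print(rmse_loop(img1, img2), rmse_vectorized(img1, img2))
```

Both return the same value; the vectorized form is the idiomatic choice for image-sized arrays.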
-
compare_by(strategy, params)[source]
Image comparison entry point. Performs the comparison of the
images given at initialization, using the requested strategy.
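A compare_by-style entry point typically dispatches to a named comparison strategy. This toy sketch shows the dispatch pattern only; the class, method names, and the one-dimensional "images" are assumptions, not pyradar's implementation.

```python
class TinyComparator:
    """Toy strategy-dispatch comparator (illustrative, not pyradar's code)."""

    def __init__(self, image_1, image_2):
        self.image_1 = image_1
        self.image_2 = image_2

    def mae(self, params=None):
        # Mean absolute error over paired elements
        pairs = zip(self.image_1, self.image_2)
        return sum(abs(a - b) for a, b in pairs) / len(self.image_1)

    def compare_by(self, strategy, params=None):
        # Look up the strategy by name and invoke it
        method = getattr(self, strategy, None)
        if method is None:
            raise ValueError("unknown strategy: %s" % strategy)
        return method(params)


cmp_ = TinyComparator([1.0, 2.0, 3.0], [1.5, 2.0, 2.0])
print(cmp_.compare_by("mae"))  # 0.5
```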
-
general_mean(params)[source]
-
linspace_rmse(params)[source]
-
mean_matrix(params)[source]
-
class pyradar.comparator.image_comparator.SimilarityMatrix[source]
Bases: object
Simple class to wrap, handle and return the matrix obtained as the
result of comparing two images.
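The role of such a wrapper can be sketched as follows; the set_matrix/get_matrix method names are assumptions for illustration, not pyradar's documented API.

```python
import numpy as np


class SimilarityMatrix:
    """Sketch of a wrapper holding a comparison-result matrix (illustrative)."""

    def __init__(self):
        self.matrix = None

    def set_matrix(self, matrix):
        # Store the comparison result as a NumPy array
        self.matrix = np.asarray(matrix)

    def get_matrix(self):
        return self.matrix


sm = SimilarityMatrix()
sm.set_matrix([[0.0, 0.5], [0.5, 0.0]])
print(sm.get_matrix().shape)  # (2, 2)
```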