solaris.eval API reference

solaris.eval.base Base evaluator class

class solaris.eval.base.Evaluator(ground_truth_vector_file)[source]

Object to test IoU for predictions and ground truth polygons.

ground_truth_fname

The filename for the ground truth CSV or JSON.

Type

str

ground_truth_GDF

A geopandas.GeoDataFrame containing the ground truth vector labels.

Type

geopandas.GeoDataFrame

ground_truth_GDF_Edit

A copy of ground_truth_GDF which will be manipulated during processing.

Type

geopandas.GeoDataFrame

proposal_GDF

The proposal geopandas.GeoDataFrame, added using load_proposal().

Type

geopandas.GeoDataFrame

Parameters

ground_truth_vector_file (str) – Path to .geojson file for ground truth.

eval_iou(miniou=0.5, iou_field_prefix='iou_score', ground_truth_class_field='', calculate_class_scores=True, class_list=['all'])[source]

Evaluate IoU between the ground truth and proposals.

Parameters
  • miniou (float, optional) – Minimum intersection over union score to qualify as a successful object detection event. Defaults to 0.5.

  • iou_field_prefix (str, optional) – The name of the IoU score column in self.proposal_GDF. Defaults to "iou_score".

  • ground_truth_class_field (str, optional) – The column in self.ground_truth_GDF that indicates the class of each polygon. Required if using calculate_class_scores.

  • calculate_class_scores (bool, optional) – Should class-by-class scores be calculated? Defaults to True.

  • class_list (list, optional) – List of classes to be scored. Defaults to ['all'] (score all classes).

Returns

scoring_dict_list – list of score output dicts for each image in the ground truth and evaluated image datasets. The dicts contain the following keys:

('class_id', 'iou_field', 'TruePos', 'FalsePos', 'FalseNeg',
'Precision', 'Recall', 'F1Score')

Return type

list
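
The score keys in each dict follow the standard detection-metric definitions. As a quick illustration (plain Python, not solaris code), the 'Precision', 'Recall', and 'F1Score' values can be recomputed from the 'TruePos', 'FalsePos', and 'FalseNeg' counts:

```python
def prf1(true_pos, false_pos, false_neg):
    """Precision, recall, and F1 from detection counts.

    Mirrors the 'Precision', 'Recall', and 'F1Score' keys in the
    score dicts returned by eval_iou().
    """
    precision = true_pos / (true_pos + false_pos) if (true_pos + false_pos) else 0.0
    recall = true_pos / (true_pos + false_neg) if (true_pos + false_neg) else 0.0
    f1 = (2 * precision * recall / (precision + recall)
          if (precision + recall) else 0.0)
    return precision, recall, f1

# e.g. 8 matched detections, 2 spurious proposals, 2 missed ground truth polygons
print(prf1(8, 2, 2))  # (0.8, 0.8, 0.8)
```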

eval_iou_return_GDFs(miniou=0.5, iou_field_prefix='iou_score', ground_truth_class_field='', calculate_class_scores=True, class_list=['all'])[source]

Evaluate IoU between the ground truth and proposals.

Parameters
  • miniou (float, optional) – Minimum intersection over union score to qualify as a successful object detection event. Defaults to 0.5.
  • iou_field_prefix (str, optional) – The name of the IoU score column in self.proposal_GDF. Defaults to "iou_score".

  • ground_truth_class_field (str, optional) – The column in self.ground_truth_GDF that indicates the class of each polygon. Required if using calculate_class_scores.

  • calculate_class_scores (bool, optional) – Should class-by-class scores be calculated? Defaults to True.

  • class_list (list, optional) – List of classes to be scored. Defaults to ['all'] (score all classes).

Returns

  • scoring_dict_list (list) – list of score output dicts for each image in the ground truth and evaluated image datasets. The dicts contain the following keys:

    ('class_id', 'iou_field', 'TruePos', 'FalsePos', 'FalseNeg',
    'Precision', 'Recall', 'F1Score')
    
  • True_Pos_gdf (geopandas.GeoDataFrame) – A GeoDataFrame containing only the true positive predictions.

  • False_Neg_gdf (geopandas.GeoDataFrame) – A GeoDataFrame containing only the false negative ground truth polygons (missed detections).

  • False_Pos_gdf (geopandas.GeoDataFrame) – A GeoDataFrame containing only the false positive predictions.

eval_iou_spacenet_csv(miniou=0.5, iou_field_prefix='iou_score', imageIDField='ImageId', debug=False, min_area=0)[source]

Evaluate IoU between the ground truth and proposals in CSVs.

Parameters
  • miniou (float, optional) – Minimum intersection over union score to qualify as a successful object detection event. Defaults to 0.5.

  • iou_field_prefix (str, optional) – The name of the IoU score column in self.proposal_GDF. Defaults to "iou_score".

  • imageIDField (str, optional) – The name of the column corresponding to the image IDs in the ground truth data. Defaults to "ImageId".

  • debug (bool, optional) – Argument for verbose execution during debugging. Defaults to False (silent execution).

  • min_area (float or int, optional) – Minimum area of a ground truth polygon to be considered during evaluation. Often set to 20 in SpaceNet competitions. Defaults to 0 (consider all ground truth polygons).

Returns

scoring_dict_list – list of score output dicts for each image in the ground truth and evaluated image datasets. The dicts contain the following keys:

('imageID', 'iou_field', 'TruePos', 'FalsePos', 'FalseNeg',
'Precision', 'Recall', 'F1Score')

Return type

list

get_iou_by_building()[source]

Returns a copy of the ground truth table, which includes a per-building IoU score column after eval_iou_spacenet_csv() has run.

load_proposal(proposal_vector_file, conf_field_list=['conf'], proposalCSV=False, pred_row_geo_value='PolygonWKT_Pix', conf_field_mapping=None)[source]

Load in a proposal geojson or CSV.

Parameters
  • proposal_vector_file (str) – Path to the file containing proposal vector objects. This can be a .geojson or a .csv.

  • conf_field_list (list, optional) – List of columns corresponding to confidence value(s) in the proposal vector file. Defaults to ['conf'].

  • proposalCSV (bool, optional) – Is the proposal file a CSV? Defaults to no (False), in which case it’s assumed to be a .geojson.

  • pred_row_geo_value (str, optional) – The name of the geometry-containing column in the proposal vector file. Defaults to 'PolygonWKT_Pix'. Note: this method assumes the geometry is in WKT format.

  • conf_field_mapping (dict, optional) – '__max_conf_class' column value:class ID mapping dict for multiclass use. Only required in multiclass cases.

Returns

0 upon successful completion.

Return type

int

Notes

Loads in a .geojson or .csv-formatted file of proposal polygons for comparison to the ground truth and stores it as part of the Evaluator instance. This method assumes the geometry contained in the proposal file is in WKT format.

load_truth(ground_truth_vector_file, truthCSV=False, truth_geo_value='PolygonWKT_Pix')[source]

Load in the ground truth geometry data.

Parameters
  • ground_truth_vector_file (str) – Path to the ground truth vector file. Must be either .geojson or .csv format.

  • truthCSV (bool, optional) – Is the ground truth a CSV? Defaults to False, in which case it’s assumed to be a .geojson.

  • truth_geo_value (str, optional) – Column of the ground truth vector file that corresponds to geometry.

Returns

Nothing; the ground truth data is loaded into self.ground_truth_GDF.

Return type

None

Notes

Loads the ground truth vector data into the Evaluator instance.

solaris.eval.base.eval_base(ground_truth_vector_file, csvFile=False, truth_geo_value='PolygonWKT_Pix')[source]

Deprecated API to Evaluator.

Deprecated since version 0.3: Use Evaluator instead.

solaris.eval.pixel Pixel-wise scoring functions

solaris.eval.pixel.f1(truth_mask, prop_mask, prop_threshold=0.5, show_plot=False, im_file='', show_colorbar=False, plot_file='', dpi=200, verbose=False)[source]

Compute pixel-wise precision, recall, and f1 score.

Finds true positives, false positives, true negatives, and false negatives, then computes the F1 score. Internally, truth_mask is multiplied by 2 and prop_mask subtracted from it, so each pixel value identifies its confusion-matrix category; both arrays are clipped first so that overlapping regions don’t cause problems.

Parameters
  • truth_mask (numpy.ndarray) – 2-D binary array of ground truth pixels.

  • prop_mask (numpy.ndarray) – 2-D array of proposals.

  • prop_threshold (float, optional) – The threshold for proposal values to be defined as positive (1) or negative (0) predictions. Values >= prop_threshold will be set to 1, values < prop_threshold will be set to 0.

  • show_plot (bool, optional) – Switch to plot the outputs. Defaults to False.

  • im_file (str, optional) – Image file corresponding to the masks. Ignored if show_plot == False. Defaults to ''.

  • show_colorbar (bool, optional) – Switch to show colorbar. Ignored if show_plot == False. Defaults to False.

  • plot_file (str, optional) – Output file if plotting. Ignored if show_plot == False. Defaults to ''.

  • dpi (int, optional) – Dots per inch for plotting. Ignored if show_plot == False. Defaults to 200.

  • verbose (bool, optional) – Switch to print relevant values. Defaults to False.

Returns

  • f1 (float) – Pixel-wise F1 score.

  • precision (float) – Pixel-wise precision.

  • recall (float) – Pixel-wise recall.
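
As a rough sketch of the computation (not the library implementation), the pixel-wise scores can be derived by thresholding the proposal mask and comparing it against the clipped truth mask:

```python
import numpy as np

def pixel_f1_sketch(truth_mask, prop_mask, prop_threshold=0.5):
    """Sketch of pixel-wise F1: threshold proposals, count TP/FP/FN."""
    truth = np.clip(truth_mask, 0, 1).astype(int)       # clip guards against values > 1
    prop = (prop_mask >= prop_threshold).astype(int)    # binarize proposals
    tp = int(np.sum((truth == 1) & (prop == 1)))
    fp = int(np.sum((truth == 0) & (prop == 1)))
    fn = int(np.sum((truth == 1) & (prop == 0)))
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    f1 = 2 * precision * recall / (precision + recall) if precision + recall else 0.0
    return f1, precision, recall

truth = np.zeros((4, 4)); truth[1:3, 1:3] = 1   # 4 ground truth pixels
prop = np.zeros((4, 4)); prop[1:3, 1:4] = 0.9   # 6 predicted pixels, 4 overlapping
print(pixel_f1_sketch(truth, prop))  # f1=0.8, precision≈0.667, recall=1.0
```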

solaris.eval.pixel.iou(truth_mask, prop_mask, prop_threshold=0.5, verbose=False)[source]

Compute pixel-wise intersection over union.

Multiplies truth_mask by 2 and subtracts prop_mask; both arrays are clipped first so that overlapping regions don’t cause problems.

Parameters
  • truth_mask (numpy.ndarray) – 2-D binary array of ground truth pixels.

  • prop_mask (numpy.ndarray) – 2-D array of proposals.

  • prop_threshold (float, optional) – The threshold for proposal values to be defined as positive (1) or negative (0) predictions. Values >= prop_threshold will be set to 1, values < prop_threshold will be set to 0.

  • verbose (bool, optional) – Switch to print relevant values. Defaults to False.

Returns

iou – Intersection over union of the ground truth and proposal masks.

Return type

float
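
As an illustrative sketch (not the library source), the same quantity is the count of shared positive pixels divided by the count of pixels positive in either mask:

```python
import numpy as np

def pixel_iou_sketch(truth_mask, prop_mask, prop_threshold=0.5):
    """Sketch of pixel-wise IoU: |intersection| / |union| of binary masks."""
    truth = np.clip(truth_mask, 0, 1).astype(bool)
    prop = prop_mask >= prop_threshold
    intersection = np.logical_and(truth, prop).sum()
    union = np.logical_or(truth, prop).sum()
    return intersection / union if union else 0.0

truth = np.zeros((4, 4)); truth[:2, :] = 1   # rows 0-1 positive
prop = np.zeros((4, 4)); prop[1:3, :] = 1    # rows 1-2 positive
print(pixel_iou_sketch(truth, prop))  # 4 shared pixels / 12 in the union ≈ 0.333
```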

solaris.eval.pixel.relaxed_f1(truth_mask, prop_mask, radius=3, verbose=False)[source]

Compute the relaxed F1 score.

Notes

Also computes relaxed precision and recall. See http://www.cs.toronto.edu/~fritz/absps/road_detection.pdf:

“completeness represents the fraction of true road pixels that are within ρ pixels of a predicted road pixel, while correctness measures the fraction of predicted road pixels that are within ρ pixels of a true road pixel.”

https://arxiv.org/pdf/1711.10684.pdf

The relaxed precision is defined as the fraction of number of pixels predicted as road within a range of ρ pixels from pixels labeled as road. The relaxed recall is the fraction of number of pixels labeled as road that are within a range of ρ pixels from pixels predicted as road.

http://ceur-ws.org/Vol-156/paper5.pdf

Parameters
  • truth_mask (numpy.ndarray) – 2-D binary array of ground truth pixels.

  • prop_mask (numpy.ndarray) – 2-D array of proposals.

  • radius (int, optional) – Radius in pixels to use for relaxed F1. Defaults to 3.

  • verbose (bool, optional) – Switch to print relevant values. Defaults to False.

Returns

output – Tuple of (relaxed_f1, relaxed_precision, relaxed_recall).

Return type

tuple

Examples

>>> truth_mask = np.zeros(shape=(10, 10))
>>> prop_mask = np.zeros(shape=(10, 10))
>>> truth_mask[5, :] = 1
>>> prop_mask[5, :] = 1
>>> prop_mask[:, 2] = 0
>>> prop_mask[:, 3] = 1
>>> prop_mask[6:8, :] = 0
>>> prop_mask
array([[0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [1., 1., 0., 1., 1., 1., 1., 1., 1., 1.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 1., 0., 0., 0., 0., 0., 0.]])
>>> truth_mask
array([[0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [1., 1., 1., 1., 1., 1., 1., 1., 1., 1.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.],
       [0., 0., 0., 0., 0., 0., 0., 0., 0., 0.]])
>>> relaxed_f1(truth_mask, prop_mask, radius=3)
(0.8571428571428571, 0.75, 1.0)

solaris.eval.iou IoU scoring functions

solaris.eval.iou.calculate_iou(pred_poly, test_data_GDF)[source]

Get the best intersection over union for a predicted polygon.

Parameters
  • pred_poly (shapely.Polygon) – Prediction polygon to test.

  • test_data_GDF (geopandas.GeoDataFrame) – GeoDataFrame of ground truth polygons to test pred_poly against.

Returns

iou_GDF – A subset of test_data_GDF that overlaps pred_poly with an added column iou_score which indicates the intersection over union value.

Return type

geopandas.GeoDataFrame
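
The per-pair score underlying this function is intersection area divided by union area. As a hedged, dependency-free sketch, using axis-aligned rectangles in place of the arbitrary shapely polygons the real function handles, the computation for a single pair looks like:

```python
def rect_iou(a, b):
    """IoU of two axis-aligned rectangles given as (xmin, ymin, xmax, ymax).

    A simplified stand-in for the general polygon IoU that
    calculate_iou() computes with shapely geometries.
    """
    ix = max(0.0, min(a[2], b[2]) - max(a[0], b[0]))  # overlap width
    iy = max(0.0, min(a[3], b[3]) - max(a[1], b[1]))  # overlap height
    inter = ix * iy
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    union = area_a + area_b - inter
    return inter / union if union else 0.0

print(rect_iou((0, 0, 2, 2), (1, 1, 3, 3)))  # 1 / 7 ≈ 0.143
```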

solaris.eval.iou.process_iou(pred_poly, test_data_GDF, remove_matching_element=True)[source]

Get the maximum intersection over union score for a predicted polygon.

Parameters
  • pred_poly (shapely.geometry.Polygon) – Prediction polygon to test.

  • test_data_GDF (geopandas.GeoDataFrame) – GeoDataFrame of ground truth polygons to test pred_poly against.

  • remove_matching_element (bool, optional) – Should the maximum IoU row be dropped from test_data_GDF? Defaults to True.

Returns

Nothing; this function currently has no return value.

Return type

None

solaris.eval.challenges SpaceNet Challenge scoring functionality

solaris.eval.challenges.get_chip_id(chip_name, challenge='spacenet_2')[source]

Get the unique identifier for a chip location from SpaceNet images.

Parameters
  • chip_name (str) – The name of the chip to extract the identifier from.

  • challenge (str, optional) – One of ['spacenet_2', 'spacenet_3', 'spacenet_off_nadir', 'spacenet_6']. The name of the challenge that chip_name came from. Defaults to 'spacenet_2'.

Returns

chip_id – The unique identifier for the chip location.

Return type

str

solaris.eval.challenges.multi_temporal_buildings(prop_csv, truth_csv, miniou=0.25, min_area=4.0, beta=2.0, stats=False, verbose=False)[source]

Evaluate submissions to SpaceNet 7: Multi-Temporal Urban Development.

Input CSV files should have “filename”, “id”, and “geometry” columns.

solaris.eval.challenges.off_nadir_buildings(prop_csv, truth_csv, image_columns={}, miniou=0.5, min_area=20, verbose=False)[source]

Evaluate an off-nadir competition proposal csv.

Uses Evaluator to evaluate off-nadir challenge proposals. See image_columns in the source code for how collects are broken into Nadir, Off-Nadir, and Very-Off-Nadir bins.

Parameters
  • prop_csv (str) – Path to the proposal polygon CSV file.

  • truth_csv (str) – Path to the ground truth polygon CSV file.

  • image_columns (dict, optional) – dict of (collect: nadir bin) pairs used to separate collects into sets. Nadir bin values must be one of ["Nadir", "Off-Nadir", "Very-Off-Nadir"]. See the source code for collect name options.

  • miniou (float, optional) – Minimum IoU score between a region proposal and ground truth to define as a successful identification. Defaults to 0.5.

  • min_area (float or int, optional) – Minimum area of ground truth regions to include in scoring calculation. Defaults to 20.

Returns

  • results_DF (pd.DataFrame) – Summary pd.DataFrame of score outputs grouped by nadir angle bin, along with the overall score.

  • results_DF_Full (pd.DataFrame) – pd.DataFrame of scores by individual image chip across the ground truth and proposal datasets.

solaris.eval.challenges.spacenet_buildings_2(prop_csv, truth_csv, miniou=0.5, min_area=20, challenge='spacenet_2')[source]

Evaluate a SpaceNet building footprint competition proposal csv.

Uses Evaluator to evaluate SpaceNet challenge proposals.

Parameters
  • prop_csv (str) – Path to the proposal polygon CSV file.

  • truth_csv (str) – Path to the ground truth polygon CSV file.

  • miniou (float, optional) – Minimum IoU score between a region proposal and ground truth to define as a successful identification. Defaults to 0.5.

  • min_area (float or int, optional) – Minimum area of ground truth regions to include in scoring calculation. Defaults to 20.

  • challenge (str, optional) – The challenge id for evaluation. One of ['spacenet_2', 'spacenet_3', 'spacenet_off_nadir', 'spacenet_6']. Defaults to 'spacenet_2'.

Returns

  • results_DF (pd.DataFrame) – Summary pd.DataFrame of score outputs, along with the overall score.

  • results_DF_Full (pd.DataFrame) – pd.DataFrame of scores by individual image chip across the ground truth and proposal datasets.