Scale vision uncertainty by distance for mapping
Uncertainty should scale with distance, since detections get worse farther
out. Also, when we eventually do live localization based on the AprilTags,
we'll need to weight them by a distance-based uncertainty whenever we see
more than one at the same time.
Open to suggestions for a more principled distance-based uncertainty.
Signed-off-by: Milind Upadhyay <milind.upadhyay@gmail.com>
Change-Id: I2e5b4e9e5de493a7dfb2bcaac739ab62f2ca65a2
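
A minimal sketch of one way the distance-based scaling could work, assuming
detection noise grows roughly linearly with distance so the information
(inverse covariance) falls off quadratically. The function name
DistanceScaledInformation, the constant kBaseStdDev, and the averaging of the
start/end distances are illustrative assumptions, not the actual
implementation in target_mapper.cc:

#include <cmath>

#include <Eigen/Dense>

Eigen::Matrix3d DistanceScaledInformation(double distance_from_camera_start,
                                          double distance_from_camera_end) {
  // Hypothetical base standard deviation at a range of 1 meter.
  constexpr double kBaseStdDev = 0.02;
  // Average the distance over the detection interval.
  const double distance =
      0.5 * (distance_from_camera_start + distance_from_camera_end);
  // Assume noise scales linearly with distance, so variance scales
  // quadratically with it.
  const double variance = std::pow(kBaseStdDev * distance, 2);
  // The information matrix is the inverse of this isotropic covariance.
  return Eigen::Matrix3d::Identity() / variance;
}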
diff --git a/frc971/vision/target_mapper.h b/frc971/vision/target_mapper.h
index 16a8343..edf2260 100644
--- a/frc971/vision/target_mapper.h
+++ b/frc971/vision/target_mapper.h
@@ -101,6 +101,9 @@
aos::distributed_clock::time_point time;
// Pose of target relative to robot
Eigen::Affine3d H_robot_target;
+ // Horizontal distance from camera to target, used for confidence
+ // calculation
+ double distance_from_camera;
TargetMapper::TargetId id;
};
@@ -131,7 +134,8 @@
// this matrix the "information"
static Eigen::Matrix3d ComputeConfidence(
aos::distributed_clock::time_point start,
- aos::distributed_clock::time_point end);
+ aos::distributed_clock::time_point end, double distance_from_camera_start,
+ double distance_from_camera_end);
private:
static ceres::examples::Pose2d InterpolatePose(