Global Verification Projects

Lindsey and I have recently been encouraging the community to take part in a new campaign in which we ask you to verify our automatically detected map objects. The campaign runs until each of the nine tasks receives 10,000 verifications, and the person with the most verifications at the end will win a Mapillary edition of the BlackVue DR900s dashcam.

Why take part?

Verifications help improve the map data that is derived from images in two ways.

  1. They facilitate human review of AI-derived datasets. If we identify 10 street-lights in an area but only 9 are correct, a verification project allows us to remove that false positive and correct the dataset (see the sketch after this list).
  2. They help to train future algorithms that will detect those street-lights more reliably. Over time the computer vision team incorporates the data collected from these projects and uses it to improve recall and accuracy.
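As a concrete illustration of point 1, here is a minimal Python sketch of how verifications can correct a dataset like the street-light example above. The function name and data layout are illustrative assumptions, not our production pipeline:

```python
def apply_verifications(detections, verdicts):
    """Drop detections that human reviewers rejected.

    detections: list of detection ids
    verdicts: dict mapping detection id -> "approve" or "reject"
    (Illustrative sketch only, not the actual Mapillary pipeline.)
    """
    return [d for d in detections if verdicts.get(d) != "reject"]

# The street-light example: 10 detections, 9 correct, 1 false positive.
detections = [f"street-light-{i}" for i in range(1, 11)]
verdicts = {d: "approve" for d in detections}
verdicts["street-light-10"] = "reject"  # the false positive

cleaned = apply_verifications(detections, verdicts)
print(len(cleaned))          # 9 street-lights remain
print(9 / 10, "->", 9 / 9)   # precision improves from 0.9 to 1.0
```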

The data derived from each image is used by government agencies, NGOs, mapping companies and others. The OpenStreetMap community is also able to download these datasets and add objects like trash cans and bicycle lanes to the map.

So what am I verifying?

Some of the tasks quickly hit 10,000 verifications, but others seem to be a bit trickier. So to help clarify things, here is a guide with a description and images for five of the verification tasks that still have space for people to move up the leaderboards. Remember that for each object, you can click on the image to view it at full size for a better look and more context.

We encourage everyone to participate in the discussion here and to help each other with the tasks. Good luck, verifiers!

Banner

Typically a flag made of textile or plastic.

Catch basin

Sewage water drainage, typically with a grate on top or an inlet in the curb.

Junction box

An electrical junction box can contain junctions of electric wires and/or cables.

Mailbox

A receptacle for sending or receiving mail. These can be residential or public.

Water valve

These access points are smaller than manholes and usually found close to the curb.


I think it’d probably be easier if these projects were more local. I don’t really know what foreign catchment basins look like; places with regular storms have different types. Here in the UK, though, they’re quite distinctive; I could recognise one of ours at quite low resolution if I knew that’s what I was looking for. The same applies to post boxes, bus stops, types of signage, etc.

edit:

Also, unless you’re looking to model the vision of a small number of high-volume contributors, a lottery based on contribution count would probably be a better incentive, as the “winner takes all” approach discourages everyone but a small number of the most determined. There’s no way I’m gonna win this, so having a play with it out of interest is my only incentive, and I’ve only done a couple of hundred.


We’re doing many local verification projects, but the Vision team is gathering global verification data as well because it helps to have a general understanding of what objects like a manhole look like in different parts of the world. That way the algorithm will be able to distinguish them whether they are round or rectangular.

In terms of competition dynamics, this is the first time we’ve done this. That’s good feedback, so we could experiment with a lottery-type method next time.


If anyone is trying to decide which particular task to partake in, the Catch Basin verification could use help: there are lots of false detections.


I don’t understand one thing: if two participants confirm each other’s decision, they both get a point, so two points in total correspond to only one picture (assuming only two people check each photo). Then where does the 10,000 you are talking about come from? Is it still the sum of the whole leaderboard, or the number of checked images, which we participants cannot see?


If the 10,000 is the sum of the leaderboard, then it seems that the competition is over.


You get a point for each detection that another user has voted on in the same way as you did: for example, if you approve a detection that’s been approved by another user, or if another user rejects a detection that you have also rejected. The order plays no role; both of you get a point when the detection receives its second matching vote. The sums on the leaderboard are the numbers we’re using for this, as they are the numbers of detections you’ve voted on where at least one other user agrees with your verification. The competition will last until all of the tasks reach 10,000 verified detections. Let me know if you have any questions. Thanks!
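To make this concrete, here is a minimal sketch in Python of how such a scoring rule could work. The data format, function name, and tie-handling are illustrative assumptions, not our actual implementation:

```python
from collections import Counter

def score(votes):
    """Score a verification campaign under the rule described above.

    votes: list of (user, detection_id, verdict) tuples,
    where verdict is "approve" or "reject".
    Returns (points per user, number of verified detections).
    (Hypothetical reconstruction, not Mapillary's actual code.)
    """
    points = Counter()
    verified = set()
    # Count how many users cast each verdict on each detection.
    tallies = Counter((det, verdict) for _user, det, verdict in votes)
    for user, det, verdict in votes:
        # A vote scores when at least one *other* user voted the same way,
        # regardless of who voted first.
        if tallies[(det, verdict)] >= 2:
            points[user] += 1
            verified.add(det)  # two matching votes = one verified detection
    return points, len(verified)

# Example: 10 users each approve the same 5,000 detections.
votes = [(f"user{u}", det, "approve") for u in range(10) for det in range(5000)]
points, unique = score(votes)
print(sum(points.values()))  # 50,000 leaderboard points in total
print(unique)                # but only 5,000 unique verified detections
```

Note how the leaderboard sum (50,000) can be much larger than the number of unique verified detections (5,000), which is the figure the 10,000 target refers to.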


10,000 represents verified detections, which is when two or more people validate the detection in the same way. Unfortunately this number is not currently publicly available, but we are sharing the results on social media. In future we will make this number public to make it easier for people to know which verification tasks need completing.


So we are talking here about UNIQUE verified detections?
So if a task has 10 members who have cross-verified 5,000 images each (leaderboard showing 50,000 total), it will not be counted as complete if all the members verified the same detections, since only 5,000 are unique.
Correct?

Can you tell me which tasks need completing and how many points are left? I have not seen any update on Twitter for a few days.

That is spot on, gpsmapper. Unfortunately this number (unique verified detections, as you call it) is not shown on the main leaderboard yet, but for now we’ll update everyone on which tasks still need validation.

Going forward we’ll make improvements to the tool that make it better suited to this.

Just junction boxes. Fewer than 2,000 more to go and we can close that one up.