Verifications help improve map data derived from images in two ways.
First, they facilitate human review of AI-derived datasets. If we detect 10 street lights in an area but only 9 are correct, a verification project allows us to remove the false positive and correct the dataset.
Second, they help train future algorithms to detect those street lights more reliably. Over time, the computer vision team incorporates the data collected from these projects to improve recall and accuracy.
The data derived from each image is used by government agencies, NGOs, mapping companies, and others. The OpenStreetMap community can also download these datasets and add objects like trash cans and bicycle lanes to the map.
So what am I verifying?
Some of the tasks quickly hit 10,000 verifications, but others seem to be a bit trickier. To help clarify things, here is a guide with descriptions and images for five of the verification tasks that still have room for people to move up the leaderboards. Remember that for each object, you can click on the image to view its full extent for a better look and more context.
We encourage everyone to participate in the discussion here and to help each other with the tasks. Good luck, verifiers!
I think it’d probably be easier if these projects were more local. I don’t really know what foreign catchment basins look like; places with regular storms have different types. Here in the UK, though, they’re quite distinctive: I could recognise one of ours at quite low resolution if I knew that’s what I was looking for. The same applies to post boxes, bus stops, types of signage, etc.
edit:
Also, unless you’re looking to model the vision of a small number of high-volume contributors, a lottery based on contribution count would probably be a better incentive, as the “winner takes all” approach discourages everyone but a small number of the most determined. There’s no way I’m gonna win this, so having a play with it out of interest is my only incentive, and I’ve only done a couple of hundred.
We’re doing many local verification projects, but the Vision team is gathering global verification data as well, because it helps to have a general understanding of what objects like a manhole look like in different parts of the world. That way the algorithm will be able to recognise them whether they are round or rectangular.
In terms of competition dynamics, this is the first time we’ve done this. That’s good feedback, and we could experiment with a lottery-type method next time.
I don’t understand one thing: if two participants confirm each other’s decision, they both get a point, so two points in total correspond to only one picture (assuming only two people check each photo). Then where does the 10,000 you are talking about come from? Is it the sum of the whole leaderboard, or the number of checked images, which we participants cannot see?
You get a point for each detection that another user has voted on in the same way as you did. For example, you get a point if you approve a detection that has been approved by another user, or if another user rejects a detection that you have also rejected. The order plays no role: both of you get a point when the detection is voted on the same way for the second time. The sums on the leaderboard are the numbers we’re using for this, as they count the detections you’ve verified that have been confirmed by at least one other user agreeing with your verification. The competition will last until all of the tasks reach 10,000 verified detections. Let me know if you have any questions. Thanks!
10,000 represents verified detections, meaning detections that two or more people have validated in the same way. Unfortunately this number is not currently publicly available, but we are sharing the results on social media. In the future we will make this number public so it’s easier for people to know which verification tasks still need completing.
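To make the rule concrete, here is a minimal sketch of how the point and verification counting could work. The data layout and the name `score_leaderboard` are hypothetical, for illustration only, not our production code:

```python
# A minimal sketch of the scoring rule described above. The data model and
# the name `score_leaderboard` are hypothetical, for illustration only.
from collections import Counter, defaultdict

def score_leaderboard(votes):
    """votes: iterable of (user, detection_id, decision) tuples,
    where decision is 'approve' or 'reject'."""
    # Group the users who made the same decision on the same detection.
    agreement = defaultdict(set)
    for user, detection, decision in votes:
        agreement[(detection, decision)].add(user)

    points = Counter()
    verified = set()
    for (detection, _decision), users in agreement.items():
        if len(users) >= 2:          # two or more users voted the same way
            verified.add(detection)  # the detection counts as verified
            for user in users:       # each agreeing user earns a point
                points[user] += 1
    return points, verified

votes = [
    ("alice", "d1", "approve"),
    ("bob",   "d1", "approve"),  # matches alice's vote: both get a point
    ("carol", "d2", "reject"),   # no second vote on d2 yet: no point
]
points, verified = score_leaderboard(votes)
print(points)    # Counter({'alice': 1, 'bob': 1})
print(verified)  # {'d1'}
```

Note that the order of voting doesn’t matter in this sketch either: points are awarded per detection once agreement exists, matching the rule above.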
So we are talking here about UNIQUE verified detections?
So if the task has 10 members who have cross-verified 5,000 images each (leaderboard showing 50,000 total), this will not be considered complete, since all the members verified the same detections, so only 5,000 are unique.
Correct?
That is spot on, gpsmapper. Unfortunately this number (unique verified detections, as you call it) is not shown on the main leaderboard yet, but for now we’ll update everyone on which tasks still need validation.
Going forward we’ll make improvements to the tool to make it better suited to this.
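For anyone who wants to see the arithmetic behind this, here is a toy sketch of gpsmapper’s scenario above; all numbers are illustrative:

```python
# A toy illustration of the scenario above: 10 members cross-verify the
# same 5,000 detections. The leaderboard sums individual points, but task
# completion is measured in unique verified detections.
members = 10
detections_verified_by_each = 5_000  # everyone votes on the same detections

leaderboard_total = members * detections_verified_by_each  # 50,000 points shown
unique_verified = detections_verified_by_each              # only 5,000 are unique

print(leaderboard_total)          # 50000
print(unique_verified)            # 5000
print(unique_verified >= 10_000)  # False: the task is not complete yet
```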