OpenStreetCam uses a not-so-good algorithm to ‘match’ your images to roads (and then colors the full road). But it does not work correctly with bicycle tracks or in places without any roads. It’s not good enough for me.
It’s not perfect, but Mapillary definitely needs a solution for overcrowded maps, to avoid things like this: https://www.mapillary.com/app/?focus=map&lat=48.881485000000005&lng=2.2705849999999996&z=17&menu=false&pKey=w7PWtU-0W4eKmpf5qUpB-A&tab=uploads Good luck finding the most recent image there. Or the one you’re looking for.
Going slightly off topic (but I don’t mind, this is far too interesting):
If they reposition the ‘green dots’ to the SfM position, I expect the lines will converge quite a bit… then the ‘only thing’ one would need to do is make it much easier to see/select the available images for that position.
This is only partially important for my project, since the number of (selected) users there will be quite minimal compared to Mapillary overall… but I have thought about this too. First, when an image has been selected, the in-image arrows should primarily stay within that user’s imagery; when that is not possible, use the current system, then stick to that user again. Also, a sort of time scale should be available when there is imagery from multiple dates at one location, because in my opinion ‘time’ is the primary variable in this case.
Addendum: there is a global search tool in the standard Mapillary app, where you can search for a time and/or a user…
Back to the subject: couldn’t we all just work together? Couldn’t Mapillary create an open-source system where the basic functions exist in a ‘node kind of way’ (SfM, uploading, management and database functions), plus a publish function to Mapillary where the ‘selectively blurred images’ are published?
- the software also gets updated by the community
- stability for the future
I must be missing other advantages (and needed functions), but for now it’s the idea that counts…
I found this project that has some SfM (open source, but the license is not clear):
Another project, full open source, very simple and designed for walking trails:
And other related to walking trails:
For blurring faces and license plates:
In this mailing-list thread there are people talking about building an open-source street-level imagery platform:
Thanks for the links, I’ll look into that, especially your last one.
The darn thing is time, of which I have way too little at the moment…
Most recent images:
I’m the founder of Trek View. I have also been working with Nick Whitelegg on OTV, also linked in @juanman’s post.
Our joint focus is on off-road 360 imagery and we have some plans for the next few months to ship some new features to our respective open-source code.
That said, for the most part, the underlying tech is similar, regardless of what is being captured (roads / paths / etc).
We use a lot of tools that already exist in some open-source form (e.g. ffmpeg for processing video files, exiftool for metadata, Pannellum for 360 viewing, opensfm for image transitions, etc.).
For us, the end destination after this processing is Mapillary and Open Trail View.
I am (still) a firm supporter of Mapillary. They have done a lot for the industry (and continue to do so with the integration into OSM). Don’t get me wrong, I get the anti-FB sentiment, but the sad truth is that hosting a platform like Mapillary is very expensive. For large tech companies, spending 100k EUR/mo (or more) on hosting is nothing; for a private company this is a lot of money. The larger you are, the more it costs, which can make growing harder and harder.
It would be great to have a much clearer statement about Mapillary and FB’s intentions. I also know, having worked at companies that have been acquired, that the road map is not always very clear. In many cases the acquired company/product disappears entirely within a year or two (this is my biggest fear).
All that being said, I am all for working together on improved solutions (with openness and privacy at the core). I know many in the OSM community are having similar discussions around the points raised in this thread.
How about we take this discussion away from Mapillary and into a public forum elsewhere (with the OSM community)? I’ll set this up today (probably a Slack team). If you are interested in joining the discussion (all welcome, even if you’re not technical), please email me and I will send out an invite: email@example.com.
To follow up Dave’s post, I am the developer of OpenTrailView (OTV). This project is still in early stages but is likely to see significant further development in coming months.
OTV has a more specific focus than Mapillary, namely walking trails and 360 panoramas only, but the underlying software could potentially be used as a starting point for developing an open-source street viewer tool.
OTV deals with the problem of connecting panoramas together using underlying OpenStreetMap data: nearby panoramas within a certain distance of the current panorama are found, and then routes calculated to each using GeoJSON Path Finder (https://github.com/perliedman/geojson-path-finder) with OSM GeoJSON data.
Each route is associated with an OSM way leading from the current panorama (by grouping initial bearings to each) and the nearest panorama along each OSM way is selected.
Pannellum is used to display the panoramas.
The Gitlab repo is linked above, alternatively if anyone is interested in more details I can be contacted on firstname.lastname@example.org.
Hi Nick. Does your system have any “smart geo position correcting”?
Hi eesger, do you mean ‘snapping’ the panoramas to the OpenStreetMap data? If so, it does do that, yes: firstly, panoramas are inserted into OSM ways (to ensure the panos are on the routing graph), and, once the graph is generated, panos are further ‘snapped’ to junctions within 5 metres (I’m going to experiment with 10 metres as I sometimes get unsatisfactory results). This ensures that a panorama which is supposed to be on a junction is indeed on the junction so that the connectivity is correct.
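To make the ‘snapping’ idea concrete, here is a minimal sketch in Python of that kind of logic: project the panorama onto the nearest way segment and only move it if it is within the distance threshold (5 or 10 metres, as Nick mentions). This is my own illustration, not OTV’s actual code; the function names are invented, and I assume coordinates have already been converted to metres in a local projection.

```python
import math

def project_to_segment(p, a, b):
    """Project point p onto segment a-b.
    Returns (snapped_point, distance). Coordinates are (x, y) in metres."""
    ax, ay = a
    bx, by = b
    px, py = p
    dx, dy = bx - ax, by - ay
    seg_len2 = dx * dx + dy * dy
    if seg_len2 == 0:
        t = 0.0  # degenerate segment: both endpoints coincide
    else:
        # Clamp the projection parameter so the result stays on the segment.
        t = max(0.0, min(1.0, ((px - ax) * dx + (py - ay) * dy) / seg_len2))
    sx, sy = ax + t * dx, ay + t * dy
    return (sx, sy), math.hypot(px - sx, py - sy)

def snap_pano(pano, way_segments, threshold=10.0):
    """Snap a panorama to the nearest way segment within `threshold` metres;
    return the original position if no segment is close enough."""
    best, best_dist = pano, threshold
    for a, b in way_segments:
        snapped, dist = project_to_segment(pano, a, b)
        if dist < best_dist:
            best, best_dist = snapped, dist
    return best
```

A panorama 4 m off a way gets pulled onto it, while one 50 m away stays where it is; raising `threshold` from 5 to 10 is then a one-argument change, which matches the fix described later in the thread.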
Hi, I tried your tool yesterday. I uploaded 10 images and they were nicely snapped onto the pedestrian road. I’m interested in the algorithm: does it use a shortest-path calculation to find feasible OSM ways for each image in a string of images, or only ‘dumb’ snapping to the closest roadway?
I did a little test also, I’m not getting it
(I think I am expecting something different somewhere, somehow)
That “Wide Sight” looks seriously promising! (License: I get the impression that the developer mostly wants his name mentioned when you use his software ;))
The main benefit of SfM (in my opinion) is that it “stabilizes” images (pitch/yaw) and that it corrects the GPS location (lat/lon) based upon image recognition (example: SfM sees sign A in image B and image C (etc.) and calculates that the yaw and pitch need a little of this, and that the latitude and longitude need a little of that). This automates and perfects the data to a great extent. And most of all it improves the “walking around” experience! (check that little film I shared earlier)
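The “sign A seen from images B and C” idea can be shown with a deliberately tiny toy example (my own construction, nothing like the real OpenSfM bundle adjustment): each image predicts the sign’s world position from its GPS fix plus the camera-relative offset it measured, and the cameras are then nudged so their predictions agree.

```python
def correct_positions(observations):
    """observations: list of (gps_xy, sign_offset_xy) per image, in metres.
    gps_xy is the image's (noisy) GPS fix; sign_offset_xy is where the image
    sees the sign relative to itself. Returns corrected camera positions
    that all agree on a single sign location."""
    # Each image's estimate of where the sign is in the world:
    estimates = [(gx + ox, gy + oy) for (gx, gy), (ox, oy) in observations]
    # Consensus sign position = mean of all the estimates.
    n = len(estimates)
    sx = sum(e[0] for e in estimates) / n
    sy = sum(e[1] for e in estimates) / n
    # Move each camera so its prediction hits the consensus exactly.
    return [(sx - ox, sy - oy) for _, (ox, oy) in observations]
```

With two images whose predictions of the sign disagree by one metre, each GPS fix gets shifted by half a metre so both now agree; real SfM does this jointly over thousands of features and also solves for pitch and yaw.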
Yes, I saw those as I moderated them yesterday. It does use a shortest path calculation… firstly, it finds all panoramas within a certain distance of the current one (500 metres I think, but I need to check) and then calculates a route (Dijkstra) to each. This is done with GeoJSON Path Finder which I mentioned in my original message. OSM data is downloaded from the server as GeoJSON and then converted to a graph using GeoJSON Path Finder.
If multiple panoramas are on the same route (have the same bearing) from the original, then the closest one is selected, and joined to the current panorama.
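The two steps above (Dijkstra route to each nearby panorama, then keeping only the closest one per initial bearing) can be sketched in plain Python. This is my own stdlib illustration, not OTV’s code: in OTV the routing itself is done by GeoJSON Path Finder, and the graph/coordinate shapes and the rounding of bearings to whole degrees here are assumptions of mine.

```python
import heapq
import math

def dijkstra(graph, start, goal):
    """Shortest path on an adjacency map {node: [(neighbour, weight), ...]}.
    Returns (total_weight, path) or (inf, []) if the goal is unreachable."""
    dist = {start: 0.0}
    prev = {}
    queue = [(0.0, start)]
    while queue:
        d, node = heapq.heappop(queue)
        if node == goal:
            path = [goal]
            while path[-1] != start:
                path.append(prev[path[-1]])
            return d, path[::-1]
        if d > dist.get(node, math.inf):
            continue  # stale queue entry
        for nxt, w in graph.get(node, []):
            nd = d + w
            if nd < dist.get(nxt, math.inf):
                dist[nxt] = nd
                prev[nxt] = node
                heapq.heappush(queue, (nd, nxt))
    return math.inf, []

def nearest_pano_per_bearing(graph, coords, current, panos):
    """Group routes by the initial bearing of their first edge and keep
    only the closest panorama in each direction (compass bearing in
    degrees, x = east and y = north in a local metric projection)."""
    best = {}
    for pano in panos:
        weight, path = dijkstra(graph, current, pano)
        if len(path) < 2:
            continue  # unreachable, or the pano is the current node itself
        (x0, y0), (x1, y1) = coords[path[0]], coords[path[1]]
        bearing = round(math.degrees(math.atan2(x1 - x0, y1 - y0))) % 360
        if bearing not in best or weight < best[bearing][0]:
            best[bearing] = (weight, pano)
    return {b: p for b, (w, p) in best.items()}
```

For a junction with panoramas 10 m and 20 m to the north and one 10 m to the east, this keeps the 10 m northern pano for bearing 0° and the eastern one for bearing 90°, which is the “same bearing, pick the closest” behaviour described above.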
You will need to rotate your panoramas using the map interface so that the camera icon is pointing in the same direction as the centre of the panorama.
Hi @eesger what did you do? There were some new panoramas uploaded this morning, e.g
Either of those look like yours?
Just to update: I’ve had two test uploads in the past 24 hours with the panos very close to each other, which have some issues with connectivity (generally, all panos submitted up to this point were at least 10 metres apart so the problem hasn’t arisen until now). I will need to investigate this bug - apologies for this. It should be easily resolved.
1340 is me. And my tests were images quite close to each other (I was curious as to how the “pointers” would show on the pano).
See message above, close-together panos are causing some issues but should be straightforward to resolve, likely to be an artefact of snapping to the OSM way.
OK, I seem to have fixed this. Close-together panoramas are now linked correctly, at least the two sets uploaded today (IDs between 1303 and 1340).
The issue was actually not to do with the fact that they were close together. It was that panoramas further than 5 metres from an OSM way were not being added to the routing graph. I have increased the limit to 10 metres and it’s now fine. @eesger @micmin1972 You will also need to rotate your panoramas using the map interface to ensure that the bearing (heading) of the camera icon corresponds to the direction of the centre point of the image, as currently the link arrow is in the wrong place on the panorama.