This might be a question for the support team as well, but… here goes nothing:
I was searching through the documentation but couldn’t find a decisive answer to this question. I would like to know exactly what sequence of operations the lookat API function uses to decide whether an image is indeed looking at a coordinate point. The answer I am looking for would ideally include a commented code snippet.
However, as mentioned in the docs, it’s usually not enough to just have a lookat in the URL, as that will pull images from all over the world that look at the point. Instead, lookat is usually used together with spatial filters like bbox or closeto to find images that look at the point within an area of interest.
Examples:
search for images that are looking at location 12,50, within 100 meters of the location 12,50:
/v3/images?lookat=12,50&closeto=12,50&radius=100
search for images that are looking at location 12,50, in the bounding box 0,0,20,20 (between left-bottom 0,0 and right-top 20,20):
/v3/images?lookat=12,50&bbox=0,0,20,20
Thanks for the reply, but this “how to use” part of the function documentation is quite clear to me; I successfully use it to query and harvest images via the API. My question relates to the geometry/trigonometry that sits behind the &lookat= call. For reference, here is the sample request (made with httr) from my R script:
library(httr)  # GET() comes from httr

# Query images within 200 m of (lon, lat), taken in the given season,
# that look at the point (lon, lat)
res <- GET(paste0("https://a.mapillary.com/v3/images/?",
                  "closeto=", lon, ",", lat,     # spatial filter: centre point
                  "&radius=", 200,               # search radius in metres
                  "&start_time=", start_season,  # temporal filter: season start
                  "&end_time=", end_season,      # temporal filter: season end
                  "&lookat=", lon, ",", lat,     # the point the images should look at
                  "&client_id=", clientID))      # API credentials
My question is thus related not to the “how to use” but rather to the “how does it work” semantics. The example snippet I was expecting would include sine/cosine trigonometric functions to estimate the angle between the image and the point being looked at, and would then compare that angle, for each image, against some threshold range beyond which the point would be deemed “invisible” from the image’s location.
Assuming your query is /v3/images?lookat=12,50&bbox=0,0,20,20, it works as follows (a sketch in code follows the steps):
1. fetch images in the bounding box
2. for each image in the bounding box:
a. calculate the bearing from the image to the lookat location
b. check if the bearing falls in the range between ca - 50° and ca + 50°, where ca is the image’s camera angle
c. if yes, add this image to the response
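To make steps a-c concrete, here is a minimal sketch in R (the language of your snippet above). It assumes the bearing is the standard great-circle initial bearing (in degrees, 0 = North, measured clockwise) and that ca is the image’s camera angle in the same convention; the names (bearing, angle_diff, looks_at) and the tol parameter are illustrative, not the actual server-side code:

# Sketch of the lookat check above. Assumptions (not the actual server code):
# the bearing is the great-circle initial bearing in degrees, 0 = North,
# measured clockwise; ca is the image's camera angle in the same convention.
deg2rad <- function(d) d * pi / 180
rad2deg <- function(r) r * 180 / pi

# initial bearing from (lat1, lon1) towards (lat2, lon2), in [0, 360)
bearing <- function(lat1, lon1, lat2, lon2) {
  phi1 <- deg2rad(lat1); phi2 <- deg2rad(lat2)
  dlon <- deg2rad(lon2 - lon1)
  y <- sin(dlon) * cos(phi2)
  x <- cos(phi1) * sin(phi2) - sin(phi1) * cos(phi2) * cos(dlon)
  (rad2deg(atan2(y, x)) + 360) %% 360
}

# smallest signed difference between two angles, in (-180, 180]
angle_diff <- function(a, b) {
  d <- (a - b) %% 360
  if (d > 180) d - 360 else d
}

# does an image at (img_lat, img_lon) with camera angle ca look at
# (look_lat, look_lon)? tol = 50 gives the ca - 50° .. ca + 50° range
looks_at <- function(img_lat, img_lon, ca, look_lat, look_lon, tol = 50) {
  b <- bearing(img_lat, img_lon, look_lat, look_lon)
  abs(angle_diff(b, ca)) <= tol
}

# example with lookat=12,50 (lon,lat order, so lat 50, lon 12):
looks_at(50, 11.99, 90, 50, 12)   # TRUE:  camera faces east, point is dead ahead
looks_at(50, 11.99, 270, 50, 12)  # FALSE: camera faces west, away from the point

The ±50° window in this sketch simply restates the documented range; it corresponds to a 100°-wide viewing cone centred on the camera angle.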
Just one thing: when you say “bearing from the image to the lookat location”, do you mean an azimuth angle between the position where the image was taken and the lookat position? If so, what are the units of these angles? I am asking because I am currently working with azimuth angles held in degrees, where 0 is North, 180 is South, etc. I am questioning whether this measurement scale is applicable to your lookat functionality, or whether you calculate your “bearing” as a local angle?
EDIT:
One more thing though: what is the reasoning behind using exactly ±50 degrees from the camera angle ca? Is it a threshold you arrived at empirically?