How does the lookat API function actually work?

Hey Map-heads,

This might be a question for the support team as well, but… here goes nothing:
I was searching through the documentation, but couldn’t find a decisive answer to this question. I am interested to know exactly what sequence of operations the lookat API function uses to decide whether an image is indeed looking at a coordinate point. The answer I am looking for would ideally include a commented code snippet.

Much obliged,

Momo

Hi Momo, the parameter you need is lookat=lng,lat. See https://a.mapillary.com/#search-images

However, as mentioned in the docs, it’s usually not enough to have just a lookat in the URL, which will pull images from all over the world that look at the point. Instead, lookat is usually used together with spatial filters like bbox or closeto to find images that look at the point within an area of interest.

Examples:

search for images that are looking at location 12,50, within 100 meters around the location 12,50:

/v3/images?lookat=12,50&closeto=12,50&radius=100

search for images that are looking at location 12,50, in the bounding box 0,0,20,20 (between left-bottom 0,0 and right-top 20,20):

/v3/images?lookat=12,50&bbox=0,0,20,20
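The two example queries above could be assembled programmatically, e.g. like this Python sketch (not an official client; the parameter names come straight from the examples, and note that urlencode percent-encodes the commas as %2C):

```python
from urllib.parse import urlencode

BASE = "https://a.mapillary.com/v3/images"

def search_url(lookat, **filters):
    """Build a v3 image-search URL combining lookat with spatial
    filters such as closeto/radius or bbox (names as in the examples)."""
    params = {"lookat": lookat, **filters}
    return BASE + "?" + urlencode(params)

# the two examples above:
url1 = search_url("12,50", closeto="12,50", radius=100)
url2 = search_url("12,50", bbox="0,0,20,20")
```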

Regards,
Tao Peng

Hello Tao,

Thanks for the reply, but the “how to use” part of the function documentation is quite clear to me. I successfully use it to query and harvest images via the API. My question relates to the geometry/trigonometry that sits behind the &lookat= call. For reference, here is a sample request as a code block from my R script:

 # requires the httr package
 res <- GET(paste0("https://a.mapillary.com/v3/images?",
                   "closeto=", lon, ",", lat,
                   "&radius=", 200,
                   "&start_time=", start_season,
                   "&end_time=", end_season,
                   "&lookat=", lon, ",", lat,
                   "&client_id=", clientID))

My question thus relates not to the “how to use”, but rather to the “how does it work” semantics. The example snippet I was expecting would be something involving sine/cosine trigonometric functions to estimate the angle between the image and the point being looked at, then comparing this angle, for each image, against some threshold range beyond which the point would be deemed “invisible” from the image’s location.

Do you have an answer for this question?

Regards,

M

Assuming your query is /v3/images?lookat=12,50&bbox=0,0,20,20, it works as follows:

  1. fetch images in the bounding box
  2. for each image in the bounding box
    a. calculate the bearing from the image and the lookat location
    b. check if the bearing falls in the range between ca - 50° and ca + 50°, where ca is the image’s camera angle.
    c. if yes, add this image to the response
  3. return images in the response
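A minimal sketch of those steps in Python (illustrative only, not the production code; it assumes each image record carries its position and camera angle ca in compass degrees, and uses the standard initial great-circle bearing formula):

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial great-circle bearing from (lat1, lon1) to (lat2, lon2),
    in degrees clockwise from North (0 = N, 90 = E)."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return math.degrees(math.atan2(y, x)) % 360

def looks_at(image, lookat_lat, lookat_lon, tolerance=50):
    """Step 2: does the lookat point fall within +/- tolerance degrees
    of the image's camera angle (image["ca"])?"""
    b = bearing_deg(image["lat"], image["lon"], lookat_lat, lookat_lon)
    # smallest angular difference, handles the 0/360 wrap-around
    diff = abs((b - image["ca"] + 180) % 360 - 180)
    return diff <= tolerance

def filter_lookat(images_in_bbox, lookat_lat, lookat_lon):
    """Steps 1-3: keep only the bbox images that look at the point."""
    return [img for img in images_in_bbox if looks_at(img, lookat_lat, lookat_lon)]
```

The `(b - ca + 180) % 360 - 180` trick yields the smallest difference between two compass angles, so the ±50° check still works for cameras pointing just either side of due North.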

Yes! Thank you, this is what I was looking for.

Just one thing: when you say “bearing from the image and the lookat location”, do you mean an azimuth angle between the position where the image was taken and the lookat position? If so, what are the units of these angles? I am asking because I am currently working with an azimuth angle held in degrees, where 0 is North, 180 is South, etc. I am wondering whether this measurement scale is applicable to your lookat functionality, or whether you calculate your “bearing” as a local angle.

EDIT:
One more thing though: what is the reasoning behind using exactly ±50 degrees from the camera angle ca? Is it a threshold you arrived at empirically?

Regards,
M