Astrometry

Last week I was chatting with Jonathan Fay, the architect of Microsoft’s World Wide Telescope (WWT). Over the years Jonathan has been a great friend to the HD View project. He provided software early on that helped bootstrap HD View, and he also introduced us to the good folks at Meade, who provided our camera control platform (see pic1, pic2, video). HD View has certainly benefited from and taken inspiration from work done in the astronomy community, so I always pay close attention when Jonathan shows me the latest goings-on in astronomy. Last week he demoed the astrometry flickr group for me.

[Image: the Jellyfish Nebula, annotated by the Astrometry.net robot]

At first glance this seems like any other group on flickr. Behind the scenes, however, something pretty amazing is happening. When a photographer posts an image to this group, a program called the blind astrometry server goes to work, automatically determining which part of the sky the image shows. It does this by extracting features from the image, in this case quadruples of stars ("quads"), and comparing them against the same features extracted from a whole-sky survey. According to a technical description, this comparison runs in less than a second thanks to sophisticated indexing techniques.
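To give a flavor of why the lookup can be so fast, here is a minimal sketch of the quad-hashing idea described in the Astrometry.net papers. It is illustrative only: the real system adds symmetry-breaking rules, error tolerances, and a kd-tree index built over descriptors for the whole sky, so the code below shows just the invariant at the heart of the trick.

    import itertools
    import numpy as np

    def quad_hash(stars):
        """Hash 4 star positions into a 4-vector that is invariant to
        translation, rotation, and scale, so the same four stars yield
        the same code no matter how the photo was framed.

        stars: array-like of shape (4, 2) holding (x, y) pixel positions.
        """
        stars = np.asarray(stars, dtype=float)
        # Use the most widely separated pair as the frame stars A and B.
        pairs = itertools.combinations(range(4), 2)
        a, b = max(pairs, key=lambda p: np.linalg.norm(stars[p[0]] - stars[p[1]]))
        c, d = (i for i in range(4) if i not in (a, b))

        # Similarity transform sending A to (0, 0) and B to (1, 1);
        # this cancels out where the image points, its roll, and its zoom.
        bx, by = stars[b] - stars[a]
        norm = bx * bx + by * by
        m = np.array([[bx + by, by - bx],
                      [bx - by, bx + by]]) / norm

        # The hash is (Cx, Cy, Dx, Dy) in the new frame; it can be looked
        # up directly in an index built from the sky survey.
        return np.concatenate([m @ (stars[c] - stars[a]),
                               m @ (stars[d] - stars[a])])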

Positioning the image enables some cool new features unique to the astrometry group.  The first is that objects in the picture are automatically tagged.  See these examples of the Jellyfish Nebula and the International Space Station flying over Seattle.  Notice that the notes in these pictures indicating star names were all placed there automatically by the blind astrometry server.  The second is that a link is added to allow the image to be browsed within the World Wide Telescope.  This lets users see the image in the context of the entire night sky.  For further reading, I recommend an interview with Christopher Stumm on the code.flickr blog.
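The automatic tagging works because a solved image comes back with a WCS (World Coordinate System) header that ties pixel coordinates to sky coordinates. Here is a small sketch of the round trip, using astropy as one convenient way to read such a header; this is an illustration of the idea, not the code the flickr robot itself runs, and the star coordinates below are example values.

    from astropy.coordinates import SkyCoord
    from astropy.io import fits
    from astropy.wcs import WCS

    # wcs.fits: the WCS header the solver produces for a submitted photo.
    wcs = WCS(fits.getheader("wcs.fits"))

    # Which part of the sky is at the center pixel of a 1600x1200 photo?
    center = wcs.pixel_to_world(800, 600)
    print(center.to_string("hmsdms"))

    # Where in the photo should a note for a known catalog object go?
    # Eta Geminorum, a star near the Jellyfish Nebula (example values).
    eta_gem = SkyCoord("06h14m52.7s +22d30m24s")
    x, y = wcs.world_to_pixel(eta_gem)
    print(f"place the note at pixel ({x:.0f}, {y:.0f})")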

So how close are we to a similar service for terrestrial photos?  Even though the universe is a vast place, I think, perhaps surprisingly, that auto-locating earth-based photos is the harder problem.  First, there is far more data.  The portion of the Sloan Digital Sky Survey used for flickr astrometry contains about 1 terabyte of imagery.  Last year the Virtual Earth (VE) team added 36 terabytes of data in just one release, and I'm sure that Google Street View similarly contains many terabytes.  Second, there are more degrees of freedom when positioning an image on earth.  For the sky there are only four variables to solve for when placing an image: three for rotation and one for scale.  On earth there are at least seven: three for position, three for rotation, and one for focal length, and this ignores other terrestrial factors like variation over time and lighting changes.
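To make the variable counting concrete, here is a sketch of the two projection models. The function names and parameterization are mine, chosen for illustration; the point is just that the terrestrial model has seven unknowns where the sky model has four.

    import numpy as np
    from scipy.spatial.transform import Rotation

    def project_sky(star_direction, rotation_rpy, pixels_per_radian):
        """Place a star on the image: 4 unknowns (3 rotation, 1 scale).
        No camera position or perspective term, because stars are
        effectively at infinity."""
        R = Rotation.from_euler("xyz", rotation_rpy).as_matrix()
        d = R @ np.asarray(star_direction)   # unit vector toward the star
        return pixels_per_radian * d[:2] / d[2]  # tangent-plane projection

    def project_terrestrial(point, position, rotation_rpy, focal_length):
        """Place a world point on the image: 7 unknowns (3 position,
        3 rotation, 1 focal length), before even considering lighting
        or how the scene changes over time."""
        R = Rotation.from_euler("xyz", rotation_rpy).as_matrix()
        cam = R @ (np.asarray(point) - np.asarray(position))
        return focal_length * cam[:2] / cam[2]   # perspective divide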

These difficult problems are slowly being overcome, though.  You already see Microsoft and Google systematically surveying the whole earth, much as the whole-sky survey mentioned above covers the heavens.  The computer vision community has developed fast image recognition techniques for non-sky pictures.  There are tools for manually (though not yet automatically) placing images within Google Earth or Virtual Earth 3D.  And the Photo Tourism project (which became Photosynth) demonstrated auto-annotation of flickr images.  Over the coming years I think we'll see these technologies packaged up into an easy-to-use web service.
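As a rough sketch of what the terrestrial analogue of quad matching looks like today, here is the standard local-feature recipe in the spirit of the Photo Tourism line of work, with OpenCV assumed purely for illustration: describe a query photo and a geotagged reference photo, then keep only the distinctive matches.

    import cv2

    sift = cv2.SIFT_create()

    def describe(path):
        """Detect local features and compute their descriptors."""
        image = cv2.imread(path, cv2.IMREAD_GRAYSCALE)
        return sift.detectAndCompute(image, None)

    _, query_desc = describe("query.jpg")              # photo to locate
    _, ref_desc = describe("geotagged_reference.jpg")  # known location

    # Lowe's ratio test keeps only the distinctive matches; enough of
    # them against one reference suggests where the query was taken.
    matches = cv2.BFMatcher().knnMatch(query_desc, ref_desc, k=2)
    good = [m for m, n in matches if m.distance < 0.75 * n.distance]
    print(f"{len(good)} confident matches")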

Matt
