After busting a gut over the last couple of weeks to put out 12 posts in 12 days – and many thanks for the positive feedback, both public and private, that I’ve received for the series – it’s now more than time to look ahead to the fast-approaching ScienceOnline 2010 conference. Amongst all the fantastically interesting sessions and discussions on offer, I’m co-chairing a session with Jacqueline Floyd entitled Earth Science, Web 2.0+, and Geospatial Applications.
For people working in geology and other field sciences like oceanography and ecology, the geographic context and the 2D or even 3D spatial distribution of our data are often extremely important, and the ability to store, access and visualise this information using web tools is gradually developing. This therefore seems like the ideal time to get a dialogue going about how web and social media tools might be used to distribute and use, and perhaps even generate, geospatial datasets. I’m hoping that this post will get some discussion going here that can feed into the session next weekend, but Jackie and I are also hoping that the conversation will continue over the next few months, with some moves towards realising some of the ideas that seem to be floating around this subject in the geoblogging community (and beyond). We believe that the discussion can be focussed around three questions:
- What’s available? With help from other geobloggers, I’ve been trying to compile a list of currently available geospatial datasets. Even though it’s still far from comprehensive – please feel free to suggest any additions, either on the wave, the wiki or in the comments here – it does show that quite a lot of interesting data is already available on the web. However, availability doesn’t necessarily mean usability, so we’d also like your thoughts on the good, the bad and the ugly of the nascent geospatial web. By contrasting sites where data are easily and intuitively accessed and visualised (something like ClimateWizard springs to mind) with sites where getting the data is still a bit fiddly (such as the USGS earthquake search page, which is a stark contrast to a lot of its other offerings), we can try to identify ‘best practices’ for developing geospatial resources in the future. (For a taste of what painless programmatic access can look like, see the first sketch after this list.)
- What would you like to see? What data are not available that could or should be? This could be in terms of specific datasets, or types of data – should we be geocoding papers, blog posts, photos? On the latter front, I have been playing around with this idea, but it’s still a bit ad hoc (the second sketch after this list shows how geocodes might be harvested from photos automatically). As a subsidiary question, there is also the issue of what other contextual tagging is required in addition to geocoding (e.g., age is just as important as location in many geological contexts).
- What tools are we missing? If the geospatial web is going to take off, we need more user-friendly tools to encode, access and visualise geospatial datasets, so that people unfamiliar with the inner workings of web protocols will actually use them. We need easy searching, perhaps via a graphical map interface where you can select the area you want to search for data in. We need ways of integrating different datasets into the same visualisation (perhaps through broad mash-ups like a Google Earth geology layer, or the excellent, if tightly focussed, GeoMapApp). And there is also the prospect of constructing new geospatial databases from geotagged data harvested from the web – crowdsourcing efforts like the USGS’s Twitter Earthquake Detector and the #uksnow twitter aggregator demonstrate two different ways of collecting data with a geospatial component. There are probably all sorts of other data you could collect in a similar fashion: but how can we make it easy for people to submit them? What I’d like to see is some sort of generic app that allows you to add geocoded photos/notes/voice memos/videos/other data (structural measurements, say) to a specified feed that can then be viewed on Google Maps/Earth – either a personal feed for a particular field trip or project, or a collective one for crowdsourcing projects. Perhaps a bookmarklet for your browser that lets you geocode things you encounter in your browsing via a map interface. (The third sketch after this list shows how simple the Google Earth end of such a feed could be.)
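To illustrate the first point, here’s a minimal Python sketch of what frictionless access can look like: it pulls recent earthquakes from the USGS’s machine-readable GeoJSON summary feeds. The feed URL and field names are my assumptions about the current service, so check them against the USGS documentation rather than taking them as gospel.

```python
# Minimal sketch: fetching recent M2.5+ earthquakes from the USGS
# GeoJSON summary feed. The URL and field names are assumptions to
# verify against the USGS documentation.
import json
import urllib.request

FEED = "https://earthquake.usgs.gov/earthquakes/feed/v1.0/summary/2.5_day.geojson"

with urllib.request.urlopen(FEED) as response:
    quakes = json.load(response)

for feature in quakes["features"]:
    # GeoJSON stores coordinates in (longitude, latitude, depth) order
    lon, lat, depth_km = feature["geometry"]["coordinates"]
    props = feature["properties"]
    print(f"M{props['mag']}  {props['place']}  ({lat:.2f}, {lon:.2f}, {depth_km} km)")
```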
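On the geocoding-photos idea from the second question: many GPS-enabled cameras already write a location into each photo’s EXIF header, so harvesting geocodes can be as simple as reading that metadata back out. A rough sketch using the Pillow imaging library follows; the filename is hypothetical, and older Pillow versions expose EXIF data slightly differently.

```python
# Rough sketch: extracting a decimal lat/lon from a geotagged JPEG's
# EXIF headers with Pillow (pip install Pillow). Assumes a recent
# Pillow version; the filename below is purely illustrative.
from PIL import Image
from PIL.ExifTags import GPSTAGS

def dms_to_decimal(dms, ref):
    """Convert EXIF (degrees, minutes, seconds) rationals to decimal degrees."""
    degrees, minutes, seconds = (float(v) for v in dms)
    decimal = degrees + minutes / 60.0 + seconds / 3600.0
    return -decimal if ref in ("S", "W") else decimal

def photo_location(path):
    """Return (lat, lon) from a JPEG's EXIF GPS tags, or None if untagged."""
    gps_ifd = Image.open(path).getexif().get_ifd(0x8825)  # 0x8825 = GPSInfo
    if not gps_ifd:
        return None
    gps = {GPSTAGS.get(tag, tag): value for tag, value in gps_ifd.items()}
    lat = dms_to_decimal(gps["GPSLatitude"], gps["GPSLatitudeRef"])
    lon = dms_to_decimal(gps["GPSLongitude"], gps["GPSLongitudeRef"])
    return lat, lon

print(photo_location("outcrop_photo.jpg"))  # hypothetical geotagged photo
```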
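And on the ‘generic feed viewable in Google Maps/Earth’ idea from the third question: the output end of such a tool could be as simple as writing KML, which Google Earth opens directly. A hand-rolled sketch, with entirely invented field notes standing in for whatever a real feed would collect:

```python
# Sketch of the output end of a geocoded field-notes feed: a KML file
# that Google Earth (or Maps) can display. The notes are invented;
# a real tool would pull these records from a shared feed.
from xml.sax.saxutils import escape

field_notes = [
    {"name": "Station 1", "lat": 51.4816, "lon": -3.1791,
     "note": "Cross-bedded sandstone, dip 12 degrees SE"},
    {"name": "Station 2", "lat": 51.4830, "lon": -3.1755,
     "note": "Fault gouge zone, roughly 0.5 m wide"},
]

placemarks = "\n".join(
    "  <Placemark>\n"
    f"    <name>{escape(n['name'])}</name>\n"
    f"    <description>{escape(n['note'])}</description>\n"
    # KML wants coordinates in longitude,latitude,altitude order
    f"    <Point><coordinates>{n['lon']},{n['lat']},0</coordinates></Point>\n"
    "  </Placemark>"
    for n in field_notes
)

kml = ('<?xml version="1.0" encoding="UTF-8"?>\n'
       '<kml xmlns="http://www.opengis.net/kml/2.2"><Document>\n'
       f"{placemarks}\n"
       "</Document></kml>\n")

with open("field_trip.kml", "w") as f:  # open this file in Google Earth
    f.write(kml)
```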
One further consideration is that there may be a difference between using such data for simple sharing and using it for research. What is required to make web-harvested datasets scientifically useful? For example, how do you determine the accuracy of the locations provided? How do you filter out bad or irrelevant data?
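As a concrete (if invented) illustration of that filtering problem, here is the kind of basic sanity check you might run over a harvested dataset before trusting it; the bounding box and sample reports below are made up for the example.

```python
# Invented illustration: basic sanity filtering of web-harvested,
# geocoded reports before any research use. The bounding box and
# sample reports are made up for the example.
def plausible(report, bbox=(-11.0, 49.0, 2.0, 61.0)):
    """Keep reports with a valid geocode inside the study area
    (bbox is min_lon, min_lat, max_lon, max_lat; here, roughly the UK)."""
    lon, lat = report.get("lon"), report.get("lat")
    if lon is None or lat is None:
        return False  # no geocode at all
    if not (-180.0 <= lon <= 180.0 and -90.0 <= lat <= 90.0):
        return False  # physically impossible coordinates
    if (lon, lat) == (0.0, 0.0):
        return False  # (0, 0) is a classic symptom of a failed GPS fix
    min_lon, min_lat, max_lon, max_lat = bbox
    return min_lon <= lon <= max_lon and min_lat <= lat <= max_lat

reports = [
    {"lon": -3.18, "lat": 51.48, "text": "heavy snow"},  # plausible
    {"lon": 0.0, "lat": 0.0, "text": "snow!!"},          # failed fix
    {"lon": -3.18, "lat": 151.5, "text": "snow"},        # impossible latitude
]
clean = [r for r in reports if plausible(r)]
print(f"{len(clean)} of {len(reports)} reports kept")
```

Accuracy is the harder half of the problem: a report geocoded to a city centroid and one geocoded by GPS look identical in the data unless the source records some precision estimate alongside the coordinates.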
I’ll be interested to hear everyone’s thoughts and ideas – remember, even if you can’t make the session, you can still help make the discussion there more fruitful, and give it some momentum in the aftermath.