Recent years have seen significant advances in automated tools for extracting place information from news articles and other text media. These advances led to a wave of map mashups that let news stories be browsed on a map. Since those earlier efforts, social media data sources have become ubiquitous, and while similar methods can be used to extract and represent places mentioned in social media reports like tweets, there are still challenges we have yet to overcome before these datasets are truly useful in a crisis situation.
A few of the challenges associated with mapping information from social media are:

- Only a small fraction of users turn on precise location sharing, so most posts carry no coordinates [3].
- Place names mentioned in the text of a post must be extracted and then geocoded, and both steps are error-prone (many place names are ambiguous).
- Relevant, crisis-related reports have to be filtered out of an enormous stream of unrelated posts.
Here at Penn State, we've been engaged in research to develop new tools for foraging through and visualizing geographic information from social media reports. The SensePlace2 project [1] harvests tweets that include disaster-related keywords; from these tweets, we then extract place names and geocode them, along with other named entities such as people, organizations, and resources. Also check out the recently released SensePlace3 [2].
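To make the extract-and-geocode step concrete, here is a minimal sketch of how a geoparsing pipeline of this general kind can work. It is not SensePlace2's actual implementation: it assumes the open-source spaCy NER models and the Nominatim geocoding service, and the `geoparse` helper is a name made up for illustration.

```python
# Hypothetical geoparsing sketch: extract place-like named entities with
# off-the-shelf NER, then geocode each one. Not SensePlace2's pipeline.
# Requires: pip install spacy geopy
#           python -m spacy download en_core_web_sm
import spacy
from geopy.geocoders import Nominatim

nlp = spacy.load("en_core_web_sm")            # small English NER model
geocoder = Nominatim(user_agent="geoparse-demo")

def geoparse(text):
    """Return (place_name, lat, lon) tuples for places mentioned in text."""
    results = []
    for ent in nlp(text).ents:
        if ent.label_ in ("GPE", "LOC", "FAC"):   # place-like entity types
            loc = geocoder.geocode(ent.text)
            if loc is not None:
                results.append((ent.text, loc.latitude, loc.longitude))
    return results

print(geoparse("Flooding reported in Harrisburg after the Susquehanna crested."))
```

Even this toy pipeline illustrates one of the challenges listed above: a bare place name like "Harrisburg" matches more than one location, so the first result a geocoder returns may not be the place the author meant.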
Because so many social media sources now feature API access to their data feeds, new map mashups are possible that integrate multiple forms of social media with other geospatial data. Esri maintains a few so-called "Public Information Maps" that show current weather mashed up with social media streams; the example below [4] is just one of them. If you click the "Social" button at the upper right of the public information map and log in to your Twitter account, you can have it show what it believes are relevant tweets. This uses the location feature that some users (very few, it turns out [3]) enable on their devices when they use Twitter.
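As a rough illustration of the kind of mashup this API access enables, the sketch below queries the Twitter/X v2 recent-search endpoint for geotagged, flood-related posts and writes them to a GeoJSON file that any web map can display. It assumes you have a v2 bearer token (the `BEARER_TOKEN` placeholder), and the query operators and field names follow the v2 documentation at the time of writing; these APIs change frequently.

```python
# Hypothetical sketch: fetch recent geotagged posts matching a disaster
# keyword from the Twitter/X v2 search API and save them as GeoJSON.
import json
import requests

BEARER_TOKEN = "YOUR_TOKEN_HERE"   # placeholder; supply your own token
URL = "https://api.twitter.com/2/tweets/search/recent"

params = {
    "query": "flood has:geo -is:retweet",   # keyword + only geotagged posts
    "tweet.fields": "created_at,geo",
    "expansions": "geo.place_id",
    "place.fields": "geo,full_name",
    "max_results": 50,
}
resp = requests.get(URL, params=params,
                    headers={"Authorization": f"Bearer {BEARER_TOKEN}"})
resp.raise_for_status()
payload = resp.json()

# Index the expanded place objects by id, then build GeoJSON point features
# from each place's bounding-box centroid.
places = {p["id"]: p for p in payload.get("includes", {}).get("places", [])}
features = []
for tweet in payload.get("data", []):
    place = places.get(tweet.get("geo", {}).get("place_id"))
    if place is None:
        continue
    west, south, east, north = place["geo"]["bbox"]
    features.append({
        "type": "Feature",
        "geometry": {"type": "Point",
                     "coordinates": [(west + east) / 2, (south + north) / 2]},
        "properties": {"text": tweet["text"], "place": place["full_name"]},
    })

with open("tweets.geojson", "w") as f:
    json.dump({"type": "FeatureCollection", "features": features}, f)
```

Note that because so few users attach a place to their posts, the `has:geo` filter will return far fewer results than a plain keyword search would.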
Links
[1] https://www.geovista.psu.edu/SensePlace2/
[2] https://www.geovista.psu.edu/SensePlace3/lite/
[3] http://firstmonday.org/ojs/index.php/fm/article/view/4366/3654#p6
[4] http://www.esri.com/services/disaster-response/severe-weather/latest-news-map