
Social Media in Crisis: A Literature Review

Recent technological advancements have made technology (devices, the internet, and information) available to a large population across the globe. In this literature review, we will investigate the relevance of social media during a crisis. In particular, we will look at how existing technologies are being exploited by various stakeholders (victims, NGOs, governments, etc.) to navigate, gather information, help, and recover from critical circumstances (earthquakes, floods, riots, mass shootings, etc.). We will also investigate the available tools and systems that help responders deliver aid to those in need, and how this interaction takes place. We will also focus on how user-generated data are represented and used to mitigate crises.

Social Media as Crisis Platform: The Future of Community Maps/Crisis Maps

In this paper, the authors point to some of the first instances where social media was used as a platform to exchange information and request help. People turned to Twitter during the 2008 Mumbai attacks in India to trade information. A similar case was observed during the 2010 earthquake in Haiti, where the non-profit organization Ushahidi.org set up operations and centers to translate messages onto a crisis map and respond to them. Next, the authors point out some of the problems with using social media as a platform during a crisis; for example, Twitter was used by criminals to tweet police activities to demonstrators at the G20 protests in Pittsburgh. Although social media platforms like Twitter have the potential to serve as platforms for crisis management, they can be difficult to use, inefficient, and incoherent. Thus, new crisis media platforms must develop the capacity to create order out of this chaos.

The problem is amplified for first responders. The torrent of information coming from sources like Facebook and Twitter is difficult to navigate, and there are no methods to validate and verify the source of information. The 1:10:89 rule by Jeff Howe [2008] says that, in most crowdsourcing models, one percent of the population creates the content, ten percent will validate (or vote on) content, and the rest will just consume it. The authors introduce 'crisis maps', which are being used by single organizations for operational planning purposes. An ideal crisis map would not only allow organizations to share information, but also to collaborate, plan, and execute shared missions. A crisis map can also be used outside of crises as a source of everyday information, like a normal map. Once people get used to a community map for normal events and information, the flip to crisis mode becomes a natural extension of how they get community information. One reason Twitter was used as a crisis medium was the familiarity of the system, so for crisis maps to work they should be easily understandable and usable without many prerequisites. Another factor that plays an important role in the development of a community/crisis map is culture. Different cultures will have different attitudes towards official authorities like the police, and different degrees of willingness to trust and cooperate with them. Thus, a crisis map, and how it is used, will not be the same in different parts of the world. The paper also reports some of the challenges these crisis maps will face, such as spoofing, identifying the audience and the data they need, and scalability. In the end, the paper calls for multidisciplinary research to bridge the gap between society and social media.

Assessment:

The paper is heavily cited and was referenced in most of the papers in this review. It is useful as it introduces the reader to a number of non-traditional social media platforms and how they work in developing countries. It provides insight into how first responders react to a crisis and what information they seek, which is crucial for this research.

Reflection:

This paper provided a good starting point for the topic. It introduced the idea of the 'crisis map' as a substitute for current digital media (Twitter, Facebook) for more effective communication during a crisis. It also provided a detailed example of an open-source crisis map, Ushahidi, being used in the developing world to report corruption or the daily problems faced by citizens. These examples provide direct insight into how people interact with social media in threatening situations, and into how a crisis map can be used as a community map under normal circumstances.

Processing Social Media Messages in Mass Emergency: A Survey

In this article, the authors survey methods for studying disasters from the perspective of information processing and management, especially how data gathered from social networks can be analysed and interpreted. The article provides researchers and developers with computational methods to create tools for formal response agencies and humanitarian organizations, giving them a way to successfully identify, filter, and organize the overwhelming amount of social media data produced during a crisis. The authors point to the Indian Ocean tsunami of December 2004 as the first time an electronic board was set up and moderated (for 10 days) so that user-generated content could be used to respond to the crisis. Similarly, after Hurricane Katrina struck New Orleans in the United States in 2005, significant emergency response activity took place on MySpace. One of the earliest instances of Twitter being used as a crisis platform was during the 2007 wildfires in San Diego, California.

One of the major difficulties faced by these crisis response platforms is evaluating the veracity, trustworthiness, and reliability of information, and coping with information overload in general: extracting time-critical information from social media that is useful for emergency responders, affected communities, and other concerned populations in disaster situations. Contrary to Hollywood portrayals, the human response to crisis is not one of panic and mayhem. The victims of a disaster do not stampede or lose control; often these 'first responders' rush to the scene to perform search and rescue. Depending upon the circumstances of the disaster, the roles and duties of different stakeholders, and their information needs, will vary. The information any individual, organization, or group finds useful will depend on their goals.

Next, the article mentions some of the existing systems for crisis-related social media monitoring. Most of these systems are built around a dashboard that provides a summary of social media during a disaster along temporal, spatial, and thematic dimensions. The data is acquired via the Application Programming Interfaces (APIs) provided by social media platforms. The rest of the paper discusses how available natural language processing and machine learning tools can be used to gather, process, and make sense of the voluminous data.
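
To make this pipeline concrete, below is a minimal sketch of the acquire-filter-bucket loop such dashboards are built around. The endpoint URL and the JSON field names (posts, text, created_at) are hypothetical stand-ins rather than any specific platform's API:

```python
# Minimal sketch of a dashboard-style acquisition pipeline: poll an API,
# keep keyword-matching posts, and bucket them by hour for a timeline view.
import requests
from collections import defaultdict
from datetime import datetime

CRISIS_KEYWORDS = {"earthquake", "flood", "evacuation", "shelter"}

def fetch_posts(endpoint="https://api.example.com/posts", params=None):
    """Pull one page of posts from a (hypothetical) social media API."""
    resp = requests.get(endpoint, params=params or {}, timeout=10)
    resp.raise_for_status()
    return resp.json()["posts"]  # assumed response shape

def bucket_by_hour(posts):
    """Group keyword-matching posts by hour (the temporal dimension)."""
    buckets = defaultdict(list)
    for post in posts:
        if any(kw in post["text"].lower() for kw in CRISIS_KEYWORDS):
            ts = datetime.fromisoformat(post["created_at"])
            buckets[ts.replace(minute=0, second=0, microsecond=0)].append(post)
    return buckets
```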

Assessment:

The paper makes a strong case for the need for technical methods for large-scale processing of the voluminous data generated during a crisis. Although the paper is heavily cited, it is of limited help for this literature review, as most of it deals with the technical aspects of data processing and representation. It does, however, provide insight into the needs of stakeholders during a crisis, which is of great value.

Reflection:

The paper provides in-depth knowledge about the available tools and technologies used to summarize and interpret the data, such as various machine learning and natural language processing techniques (feature extraction, de-duplication, filtering, etc.). The technical aspects are not especially relevant to my research, but the paper does provide a list of tasks performed before the data are further processed. The processing of data can provide insight into how users write messages during a crisis and whether this differs from normal circumstances. The paper will greatly appeal to researchers seeking novel methods for processing large amounts of user-generated data.

Social Haystack: Dynamic Quality Assessment of Citizen-Generated Content during Emergencies

In this paper the authors conduct an empirical study to explore emergency services' attitudes towards citizen-generated content, to understand how social media is used in current work practices, and to assess the quality of content gathered from such sources. They introduce a web-based application, 'Social Haystack', to provide relevant, high-quality information support. During an emergency, decision makers need to find a compromise between the urgency and the accuracy of data. Decisions made in a crisis can help or worsen the situation; thus, there is a need for the most current and accurate data sets.

Empirical study: during a normal working day, observations were conducted (for 6 hours) at a major cultural event with over 40,000 visitors. Besides the observation, 5 interorganizational group discussions were conducted in workshops to better understand the practices of interorganizational crisis management. A further 38 individual interviews were conducted with actors from the participating organizations, ranging across the hierarchy from lower levels (Head of Section) to higher levels (Head of Control Center). Interviews and group discussions were audio recorded and transcribed for later analysis. The text modules were then divided into seven categories: technology usage; situation illustration and construction; information quality, quantity and trustworthiness; communication practices; cooperation and collaboration; debriefing and learning; and citizen involvement.

Next, the authors selected five criteria for measuring the quality of citizen-generated data: link, credibility, up-to-dateness, dissemination, and quality of coordinates. They then derived a quality-score formula for a piece of user-generated content from these criteria and the degree to which each is fulfilled. To assess the quality of the content, a service-oriented client-server application was created using HTML, CSS, JavaScript, and jQuery on the client side, .NET on the server side, and MySQL for the database. The system returns relevant results based on search terms given by the user as input. 20 users evaluated the system to test both its usability and its practical relevance, i.e. how the application would be used and the difficulties users might encounter while using it. The application was evaluated against the quality measurements identified earlier. Based on the insights from this experiment, the authors provide guidelines for HCI researchers working on citizen-generated data from social media.
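
The paper's exact scoring formula is not reproduced here; the sketch below assumes the common weighted-sum form, combining per-criterion fulfillment degrees in [0, 1] under illustrative weights:

```python
# Illustrative weighted-sum quality score over the five criteria.
# The weights and the [0, 1] fulfillment degrees are assumptions;
# the paper derives its own formula.
CRITERIA_WEIGHTS = {
    "link": 0.15,
    "credibility": 0.30,
    "up_to_dateness": 0.25,
    "dissemination": 0.15,
    "coordinate_quality": 0.15,
}

def quality_score(fulfillment: dict) -> float:
    """Combine per-criterion fulfillment degrees (0..1) into one score."""
    return sum(w * fulfillment.get(c, 0.0) for c, w in CRITERIA_WEIGHTS.items())

# Example: a recent, geotagged post from a credible account.
print(quality_score({"credibility": 0.9, "up_to_dateness": 1.0,
                     "coordinate_quality": 0.8}))
```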

Assessment:

The study was conducted in a controlled environment; thus, the results may not accurately represent a critical situation. The evaluation included 20 users (students and professionals), neither group being an accurate representation of the intended users.

Reflection:

The paper is a good source of information as it provides an in-depth analysis of what user-generated data looks like and what characterizes quality data. It will help in identifying varying qualities of information and how to group them.

The Emergence of Online Widescale Interaction in Unexpected Events: Assistance, Alliance & Retreat

In this paper the authors examine a group on a popular social networking site as a virtual destination in the aftermath of the Northern Illinois University (NIU) shootings of February 14, 2008, in relation to the activity that happened in response to the Virginia Tech (VT) tragedy 10 months earlier. They investigate the features that get activated when a large number of people converge on such situations. People employ innovative ideas during disasters to help others, offer support, and ameliorate the severity of the crisis. They use social networking and media sites, blogs, photo-sharing services, and similar forums around the world to participate in the processes of crisis recovery.

In 2007, 32 people were killed and many others injured in a two-and-a-half-hour mass shooting at VT, while in 2008 a shooter killed 5 students and injured others before killing himself at NIU. To capture the ephemeral data that quickly disappears during times of rapid social change, two members of the research team traveled to Blacksburg, VA five days after the event to conduct interviews focused on the information generation and dissemination activities of university students on the day of the shootings, while the home team monitored a number of online sites. In both cases, the selection of social networking site (SNS) groups for discourse analysis was based on indications of high activity.

In the case of NIU, a group, First NIU, sprang up immediately to provide support to NIU. The researchers took note of its membership size, the number and nature of wall posts, member profile information, and discussion topics. Within 8 hours the membership grew to 24,000, and after one day 50,000 people were part of the group. VT participation in First NIU was noticeable, as VT members frequently expressed support, offered advice, and noted the connection between the two universities.

The authors describe the central aspects of post-emergency self-organizing activity in this environment of widescale interaction: sensemaking; alliance (empathy, caretaking, guidance and instruction); and retreat, to regain control in a public setting. Participants are aware of the highly public and accessible nature of SNS, which is why so many turned to and used SNS as a communication medium in the aftermath of the shooting. However, participants were also aware of the precarious position they were in by using SNS to relay and garner information.

This research brings attention to the way social networking sites, as virtual destinations that create the opportunity for widescale interaction, support crisis-related response, both for people more directly affected by and seeking information about the emergency, and for peripheral participants attempting to make sense of the newsworthy event.

Reflection:

This paper is the most relevant to this research. It provides a good example of how social sites have been used during a man-made crisis (a mass shooting) and of how the public's reaction differs from that to other disasters.

Relief Work after the 2010 Haiti Earthquake: Leadership in an Online Resource Coordination Network

Following the Haiti earthquake on January 12, 2010, the US Navy dedicated vast resources to the relief effort at the affected location. It coordinated with non-governmental organizations (NGOs) participating in the relief effort and used an online discussion forum for communication. The article claims that most site activities are broadcast-oriented and do not result in discussion, but in the small percentage of cases where discussion emerges, participants focus on the exchange of medical, Geographic Information Systems (GIS), and on-the-ground equipment information. In a disaster, information seems to flow in three directions: from authorities to the public, from the public to authorities, and peer-to-peer. The paper focuses on the role of government-organized information and communications technologies (ICTs) designed to support information sharing and coordination by leveraging substantial government infrastructure; in this case, the US Navy's medical, equipment, and GIS information capacity.

Information communicated during disaster situations is diverse in its mode, nature, and trustworthiness. Beyond simply sharing personal, situational, or coordination information, people use online communities during disasters for interpersonal and emotional support. ICTs are crucial for bridging the gap between authority and information in a crisis. Many experts in the disaster relief field advocate the widespread adoption of a global information network like ReliefWeb, developed by the UN Department of Humanitarian Affairs in 1996, which is a central repository for maps, press releases, field reports, and important forms. The research questions addressed in the paper are: 1) what kinds of information exchange and coordination occurred on the US Navy sponsored APAN site during the Haiti crisis; 2) how do coordination and information exchange change over time; and 3) to what extent do mediation and coordination activities (invisible brokerage) by forum members become visible through electronic trace data. They performed network analysis and grounded theory analysis of the electronic trace data from the APAN site. There were 5,606 total discussion threads, which included a number of duplicates.

To answer the first question, the team performed content analysis of the topics of the 228 (4%) initial posts that generated a response. The most prevalent uses of the forum related to the unique resources available from the US Navy's extensive medical, transportation, and GIS data resources, both at sea and on the ground. To answer research questions two and three, they incorporated network analysis of read and post behavior across all threads, using 23 weeks of forum data to demonstrate differences between read and post behavior.
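
As an illustration of this kind of network analysis, the sketch below builds a directed reply graph from thread data (the field names are assumptions) and uses degree centrality as a crude proxy for the "invisible brokerage" the authors look for:

```python
# Sketch: users become nodes; a reply in a thread adds a directed edge
# from the replier to the thread starter.
import networkx as nx

def build_reply_graph(threads):
    g = nx.DiGraph()
    for thread in threads:
        starter = thread["author"]
        g.add_node(starter)
        for reply in thread["replies"]:
            g.add_edge(reply["author"], starter)
    return g

g = build_reply_graph([
    {"author": "navy_gis", "replies": [{"author": "ngo_medic"},
                                       {"author": "logistics1"}]},
])
# Rank members by how connected they are in the discussion network.
print(sorted(nx.degree_centrality(g).items(), key=lambda kv: -kv[1]))
```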

Assessment:

The paper explicitly discusses the role of NGO-sponsored coordination forums, the dynamics of these forums, and how leadership emerges in them. Its scope is very narrow, and the argument might not carry over to more open platforms like Twitter, where people are both welcoming and hostile.

Reflection:

This paper will help shape arguments about how forums used and regulated by NGOs and governments differ from platforms like Twitter and Facebook. The paper will also provide a baseline for categorizing users based on their activity and involvement.

Online public communications by police & fire services during the 2012 Hurricane Sandy

In this paper the authors report on the online communications of all the coastal fire and police departments within a 100-mile radius of Hurricane Sandy's US landfall. They collected data from 840 fire and police departments to empirically study how widespread online media use is for emergency public information communication, and the nature of that use. The data was collected from a specified geographical boundary that included the areas hardest hit by the storm, with a scope that allowed for analytical breadth: a total of 26 counties located across 5 US states, all within a 100-mile radius of where Sandy made landfall. Next, they identified all fire and police departments within the 26 counties from the National Fire Department Census Database. The police departments exist at three levels: state, county, and municipality. Each of the 5 states has a state police department, and each of the 26 counties has a sheriff's office. A total of 272 police departments were identified. For each fire and police department, they looked at four online communication media: a website, a subscriber-based notification service (Nixle), a microblogging service (Twitter), and a social networking service (Facebook).

They found 128 Nixle accounts and extracted the post information for each using web-scraping methods; the Fire & Police Nixle Collection contains 930 posts. They identified 114 Twitter accounts and retrieved the full message streams for each using the Twitter REST API; the resulting Fire & Police Sandy Tweet Collection had 3,033 tweets. They also identified 556 public Facebook accounts and retrieved the full set of posts for each using the Facebook Graph API; the Fire & Police Sandy Facebook Collection had 4,652 posts. Surprisingly, only 9% of on-topic tweets in the Fire & Police Tweet Collection contain hashtags related to Hurricane Sandy (e.g. #Sandy, #frankenstorm, #HurricaneSandy), and only 10% of the departments that used Twitter replied directly to Sandy-specific tweets. Comparing activity across these platforms, 90% of departments were Inactive, 6% Non-Sandy Active, and only 4% Sandy Active on Nixle, whereas engagement on Twitter was only slightly higher, with 89% Inactive, 4% Non-Sandy Active, and 7% Sandy Active. The highest levels of online engagement were found on Facebook, with 52% Inactive, 23% Non-Sandy Active, and 25% Sandy Active. Different departments generated different numbers of tweets and had different levels of visibility; some departments generated far more tweets than others and also responded more to members of the public. The disproportionately high level of public engagement found in some Twitter accounts seems to be due to several factors: a large audience, power and phone outages, and a large neighborhood.
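
The three-way engagement coding lends itself to a simple rule. The sketch below is one plausible reading of it; the storm-term list and the post fields are assumptions, not the authors' exact criteria:

```python
# A department is Inactive if it posted nothing in the study window,
# Sandy Active if any post matches storm-related terms, and
# Non-Sandy Active otherwise.
SANDY_TERMS = ("sandy", "frankenstorm", "hurricane")

def classify_department(posts):
    if not posts:
        return "Inactive"
    if any(t in p["text"].lower() for p in posts for t in SANDY_TERMS):
        return "Sandy Active"
    return "Non-Sandy Active"

print(classify_department([{"text": "Shelter open ahead of #Sandy"}]))
```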

Relatively few of these departments used online media in their public communications during this event, and there was a high degree of variance across the different media under study. Among those departments that did use online media during Hurricane Sandy, their discursive moves signal creative adaptations and set meaningful precedents for the future of emergency management.

Assessment:

The paper talks explicitly about the response (or lack thereof) of the fire and police departments under critical circumstances. However, it does not take into account the constrained conditions (a lack of police officers, etc.) and the chaos under which these departments work, especially during a large-scale disaster.

Reflection:

The paper will help in making reasonable assumptions about the attitudes of police and fire departments when it comes to engaging with social media for gathering or spreading information. It also provides information on how several factors (such as audience, power and phone outages, and location) affect information generation and propagation.

Introduction: Social Media and Collaborative Systems for Crisis Management

In this article the authors gather and summarize a set of empirical studies of the design and use of recent technological advances to support collaboration in crisis management and response, with implications for the design of future crisis management systems. A disaster is defined by the United Nations (UN) as a serious disruption of the functioning of a society, while a catastrophe refers to a disaster causing such widespread human, material, or environmental losses that it exceeds the ability of the affected part of society to cope adequately using only its own resources. Both disasters and catastrophes create a crisis situation. There are at least four phases of the emergency management process: mitigation, preparedness, response (also called emergency management), and recovery.

Tsunami Warning Systems (TWS) are an example of what the authors term "high reliability virtual organizations"; as with other high reliability organizations, the consequences of failure are very severe, and reliability and safety are primary concerns. The TWS case study describes problems with adopting the technology as intended, owing to a lack of interoperability and to HCI weaknesses. The messages generated by these TWS often violate basic HCI guidelines, such as clarity, consistency, and communicating in the user's language.

Based on semi-structured interviews with members of two public safety organizations at Virginia Tech, Kwon and his colleagues identified and coded five main themes relating to barriers to interoperability: information sharing, communication readiness, operational awareness, adaptiveness, and coupledness.

There are a number of collaboration issues within the context of crisis and emergency management. Both knowledge sharing and activity awareness become essential in crisis planning processes that involve dealing with multiple streams of multi-perspective data and that require people with different roles and backgrounds to make collaborative decisions. The article refers to three experiments involving a paper prototype in a collocated work setting, a first software prototype in a distributed setting, and a second, enhanced software prototype in a distributed setting. The ability to coordinate the actions of the members of a response team is fundamental to collaborative crisis response. To provide a quick and adequate response, team members have to synchronize their activity while distributed across space.

Next, the authors identify some challenges for the future. A major one is integrating information from citizens during disasters, gathered via social media, with that of official responders disseminating messages through channels such as television, radio, SMS, and websites. Determining the trustworthiness of information is another major challenge first responders face, as is the technical, but also social, interoperability of information systems and organizations.

Reflection:

The paper defines the disaster management process and the steps involved in it. It also identifies the barriers that prevent interoperability between systems.

Emergency Situation Awareness from Twitter for Crisis Management

In this paper the authors describe ongoing work to detect, assess, summarise, and report messages of interest for crisis coordination published on Twitter. The platform and client tools they developed, collectively termed the Emergency Situation Awareness – Automated Web Text Mining (ESA-AWTM) system, demonstrate how relevant Twitter messages can be identified and utilised to inform situation awareness during a crisis over time. These tools have recently been deployed in a trial for use by crisis coordinators.

Crisis coordinators need tools and services that mine social media to:

Detect unexpected or unusual incidents, possibly ahead of official communications.

Condense and summarise messages about an incident without having to read individual messages.

Classify and review high-value messages during an incident (e.g. messages describing infrastructure damage or cries for help).

Identify, track, and manage issues within an incident as they arise, develop, and conclude; pro-actively identify and manage issues that may last for hours, days or weeks.

Perform forensic analysis of incidents by analysing social media content over the lifetime of a crisis and beyond.

The authors address these five requirements in their proposed tool. The tweets of interest for the Australian Government Crisis Coordination Centre (CCC) are obtained using the Twitter search API, by defining a location and search radius that covers most of Australia and New Zealand. A subset of the tools has been deployed for trial by the Media and Crisis Communication team within the Strategic Communication Branch (SCB) of the Australian Government Attorney-General's Department. The interface proved very responsive, with results for each selected alert displaying in under a second. Because the client tool was re-engineered to retrieve and display tweet content directly from Twitter, only tweets from the last six days are available; there is no such limitation when using the tweet repository.
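
For orientation, a region-bounded query of this kind looked roughly like the following against the (since retired) Twitter v1.1 search API, whose geocode parameter takes a latitude, longitude, and radius; the coordinates, radius, and token below are placeholders, not the values ESA-AWTM used:

```python
# Rough sketch of pulling tweets for a fixed geographic region.
import requests

def search_by_region(bearer_token, lat=-25.0, lon=135.0, radius_km=2500):
    resp = requests.get(
        "https://api.twitter.com/1.1/search/tweets.json",
        headers={"Authorization": f"Bearer {bearer_token}"},
        params={"geocode": f"{lat},{lon},{radius_km}km", "count": 100},
        timeout=10,
    )
    resp.raise_for_status()
    return resp.json()["statuses"]
```

The roughly six-day limit the authors mention when querying Twitter directly reflects the short history this search endpoint exposed, which is why their own tweet repository had no such limitation.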

Although this information will not replace existing procedures and information sources, it can provide a new source of data with many potential applications within emergency management and crisis coordination. Social media can also play a role in providing evidence of pre-incident activity, near-real-time notification of an incident occurring, first-hand reports of incident impacts, and a gauge of the community response to an emergency warning.

Reflection:

The paper identifies the features that a social media mining tool or service should possess, and hence the essential qualities of a good crisis management system.

The Case for Readability of Crisis Communications in Social Media

Emergency management specialists, crisis response practitioners, and scholars have long recognized that clear communication is essential during crises. The work presented in this paper studies the readability of crisis communications posted on Twitter by governments, non-governmental organizations, and mainstream media. The data analyzed comprises hundreds of tweets posted during 15 different crises in English-speaking countries, which happened between 2012 and 2013. The authors then describe factors that negatively affect comprehension and consider how understanding can be improved. They define readability as the ease with which a written text can be read or understood by a reader. Readability is different from "reading ability", which corresponds to the reading skills of the reader, and also differs from "legibility", which is concerned with the physical characteristics of a text (font, spacing, and text position on the sheet/screen).

For the experiment, they used the CrisisLexT26 collection from Olteanu et al. This is a freely available collection of tweets from 26 crisis events that happened in 2012 and 2013, with about 1,000 tweets per crisis, labeled for "informativeness" (Informative or Non-informative), "information type" (Affected individuals; Infrastructure and utilities; Donations and volunteering; Caution and advice; Sympathy and emotional support; Other useful information), and "source" (Eyewitness, Government, NGOs, Business, Media, Outsiders). The tweets selected were from countries with large native English-speaking populations, and the sources were either NGOs, government, or media. These tweets were then annotated by crowdworkers according to their degree of simplicity.

A manual analysis of these tweets showed that some of the tweets labeled "unclear" were written in a mixture of languages. The authors also noted that tweets that are problematic to read tend to include more acronyms (almost double the amount) and more user mentions. Tweets considered problematic to read also include more hashtags, especially hashtags placed at the beginning of the tweet. From these observations, hashtags placed at the beginning of a tweet appear to impair the readability of crisis tweets.

Based on these observations, the authors make recommendations on how to write tweets during crisis events (a small illustrative checker follows the list):

Message length:

Include a maximum of 1 or 2 main points per tweet.

Write brief, concise sentences.

Remove superfluous words.

Write fully-formed sentences; avoid writing incomplete thoughts, or incomplete messages.

Vocabulary:

Use only simple and familiar words.

Use abbreviations and acronyms with care.

Twitter-specific elements:

Place all hashtags at the end of the tweet and do not write more than 2 hashtags.

Avoid mentions (e.g. “@user”).
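
A minimal checker for the Twitter-specific recommendations above might look as follows; the regex heuristics and thresholds are my own simplifications, not the authors' annotation scheme:

```python
# Flag tweet features the paper associates with poor readability.
import re

def readability_issues(tweet: str):
    issues = []
    if len(re.findall(r"#\w+", tweet)) > 2:
        issues.append("more than 2 hashtags")
    if re.match(r"\s*#", tweet):
        issues.append("hashtag at the beginning of the tweet")
    if re.search(r"@\w+", tweet):
        issues.append("contains user mentions")
    if len(re.findall(r"\b[A-Z]{2,}\b", tweet)) > 1:
        issues.append("multiple acronyms")
    return issues

print(readability_issues("#Sandy #NJ EOC UPDATE: @user shelters open"))
```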

Assessment:

The results are not generic but apply only to Twitter; following these recommendations may not have the same effect on other social media. The tweets were also annotated manually, which means there could be some bias.

Reflection:

This paper was very informative. It outlines the features of a good tweet that gets retweeted many times, and the features of tweets that fail to propagate. It will help define the characteristics of good user information and help generate recommendations on how to write good messages.

Harnessing the Crowdsourcing Power of Social Media for Disaster Relief

This article briefly describes the advantages and disadvantages of crowdsourcing applications applied to disaster relief coordination. It also discusses several challenges that must be addressed to make crowdsourcing a useful tool that can effectively facilitate relief progress in terms of coordination, accuracy, and security. Crowdsourcing allows capable crowds to participate in various tasks, from simply "validating" a piece of information or photograph as worthwhile, to complicated editing and management such as that found in virtual communities providing information, from Wikipedia to Digg. This is a form of collective-wisdom information sharing that strongly leverages participatory social media services and tools.

Advantages: first, crowdsourced data, including user requests and status reports, is collected almost immediately after a disaster via social media. The large amount of near-real-time reports allows relief organizations to identify and respond to urgent cases in time. Second, crowdsourcing tools can collect data from emails, forms, tweets, and other unstructured sources and then perform rudimentary analysis and summarization, such as creating tag clouds, trends, and other filters. Third, providers can include geotag information for messages sent from some platforms (such as Twitter) and devices (including handheld smartphones); such crowdsourced data can help relief organizations accurately locate specific requests for help.
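
The "rudimentary analysis and summarization" step, such as building a tag cloud, is essentially a term-frequency count over incoming reports; a minimal sketch follows (the stopword list is an invented placeholder):

```python
# Count the most frequent terms across reports -- the raw input to a
# tag cloud or trend summary.
import re
from collections import Counter

STOPWORDS = {"the", "a", "an", "is", "are", "to", "of", "in", "and", "we"}

def tag_cloud_counts(reports, top_n=20):
    words = (w for r in reports
             for w in re.findall(r"[a-z']+", r.lower())
             if w not in STOPWORDS)
    return Counter(words).most_common(top_n)

print(tag_cloud_counts(["Need water in Carrefour", "Water shortage reported"]))
```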

Disadvantages: current applications do not provide a common mechanism specifically designed for collaboration and coordination between disparate relief organizations. Data from crowdsourcing applications, while useful, does not always provide all the right information needed for disaster relief efforts. Current crowdsourcing applications also lack adequate security features for relief organizations and relief operations; for example, crowdsourcing applications that are publicly available for reporting are also publicly available for viewing.

Challenges:

Geo-tag Determination

Report Verification

Automated Report Summarization

Spatial-Temporal Mining for Social Behavior Prediction

Scalability and Safety

Assessment:

This paper has one of the highest citation counts in this list. It is an introductory paper that presents the crowdsourcing power of social media.

Reflection:

The paper highlights the various advantages of and shortfalls in crowdsourcing user-generated information. It establishes the need for crowdsourcing user-generated data, provides a list of challenges faced by researchers in this area, and gives a starting point for further research.

Real-Time Crisis Mapping of Natural Disasters Using Social Media

The authors have developed a real-time crisis-mapping platform capable of geoparsing tweet content. Their approach exploits readily available location information from gazetteers, street maps, and volunteered geographic information (VGI) sources. Their goal is to improve the geoparsing precision of street-level tweet incident reports and to empirically quantify the accuracy of the resulting social media crisis maps during natural disaster events; such results can help disaster management agencies assess the value of social media crisis mapping. Current real-time geospatial information systems (GIS) mostly map social media microblog reports using geotag metadata with longitude/latitude coordinates. According to the US Geological Survey (USGS), the main benefits of Twitter-based detection systems over sensor-based systems are their fast detection speed and low cost. Social media GIS systems can be combined with conventional GIS systems deploying hardware-based sensors, such as in situ seismic sensors or remote-sensing aerial photography and satellite imaging, to build a coherent situation assessment picture and present it to emergency responders, civil protection authorities, and the general public, helping coordinate response efforts and improve overall awareness. Their system differs from existing crisis-mapping approaches in that it geoparses tweet text in real time rather than only using the tweet's geotag; thus, it can use all crawled tweets, as opposed to only the roughly 1% of tweets that contain geotag metadata.
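
In this spirit, gazetteer-based geoparsing reduces to matching preloaded place names against tweet text and returning their coordinates. The sketch below uses invented gazetteer entries and ignores the disambiguation a real system needs:

```python
# Toy gazetteer-matching geoparser: scan tweet text for known place names.
GAZETTEER = {
    "moore": (35.34, -97.49),
    "rockaway beach": (40.58, -73.82),
}

def geoparse(tweet_text):
    """Return (place, (lat, lon)) pairs whose names appear in the tweet."""
    text = tweet_text.lower()
    return [(place, coords) for place, coords in GAZETTEER.items()
            if place in text]

print(geoparse("Flooding reported near Rockaway Beach boardwalk"))
```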

Before each location is displayed on the crisis map, they calculate a statistical baseline for each location, which allows them to compute a threshold level for tweet mentions. The hypothesis behind this is that locations mentioned many times in a sample window are more likely to be coherent and credible disaster-related location reports than those with only one or two mentions. Ultimately, this threshold value will be tailored to suit each crisis-management control room, reflecting the error tolerance of the final decision makers.
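
A minimal sketch of this baseline-and-threshold filter, with the multiplier standing in for a control room's error tolerance:

```python
# Plot a location only if its mention count in the current window
# sufficiently exceeds its historical baseline rate.
def locations_to_map(window_counts, baseline_counts, multiplier=3.0):
    flagged = []
    for loc, count in window_counts.items():
        baseline = baseline_counts.get(loc, 1.0)  # assumed floor of 1 mention
        if count >= multiplier * baseline:
            flagged.append((loc, count))
    return flagged

print(locations_to_map({"moore": 42, "norman": 2},
                       {"moore": 3.0, "norman": 2.5}))
```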

They conducted two case studies to evaluate the quality of their tweet maps. The first event studied was Hurricane Sandy (October 2012) and the second was the May 2013 tornado that devastated the town of Moore, south of Oklahoma City. They compared each map to a ground-truth storm-surge map from the official post-event impact assessment produced by the US National Geospatial-Intelligence Agency (NGA). As expected, when the mapping threshold was increased, map precision increased, but at the expense of recall. These case studies also show that crisis maps generated from social media data can compare well to gold-standard post-event impact assessments from national civil protection authorities. Both case studies demonstrate that it is possible to obtain high-precision (90 percent or higher) geoparsing from real-time Twitter data by exploiting large databases of preloaded location information for at-risk areas.

Assessment:

The paper has a good number of citations. But even though it deals with sensitive user data such as location, the paper does not mention privacy; it does not even acknowledge the potential privacy threat in handling such data.

Reflection:

The paper focuses on location-based parsing of data and provides ways to obtain location-based information even when tweets are not geotagged. It can help in arguing for the relevance of location and the context in which tweets are generated.

Spatial Computing and Social Media in the Context of Disaster Management

In this paper the authors describe how citizens' participatory sensing, coupled with social media, can enable effective and timely information sharing for situational awareness and informed decision making. In this environment of diverse and powerful smart devices, spatial computing devices generate rich and diverse content; such sources include social media feeds, blogs, maps and GIS systems, digital libraries, e-government portals, television, and newsfeeds. A recent survey by the American Red Cross indicates that close to 80 percent of survey participants expect emergency response organizations to monitor social sites during a disaster, more than 30 percent expect to receive help within one hour of posting a request to social sites, and 24 to 30 percent use social media to report their well-being to their loved ones. In line with these requirements, the US Department of Homeland Security's Science & Technology Directorate (DHS-S&T) has initiated Social Media Alert and Response to Threats to Citizens (SMART-C), which aims to develop citizens' participatory sensing capabilities for decision support throughout the disaster life cycle via a multitude of devices (such as smartphones) and modalities (MMS messages, Web portals, blogs, tweets, and so on).

The combination of spatial computing and social media presents some unique challenges. The geospatial data retrieved from smartphones, sensors, and other devices often contains sensitive personal information; when combined with social media data, it raises concerns about increased privacy breaches. Such privacy concerns must be addressed in all phases of spatial computing, including data collection, storage, analysis, and dissemination. Detecting and characterizing events from unstructured multimodal data is a key challenge in the spatial computing environment, given the large number of sources producing volumes of multimodal data. When viewed in isolation, data from different sources can appear irrelevant, but when analyzed collectively it can reveal interesting events. A key issue when using social media data is how reliable and accurate that data is, given that it is often collected from anonymous participants; this requires algorithms and techniques that can corroborate and correlate multimodal data from multiple sources in real time. In the current environment of cloud computing, the service discovery and composition that smart devices offer, along with identity management, present significant challenges as well.

Addressing these issues becomes especially challenging given that solutions must consider the right balance between stakeholders’ requirements and policies on one hand, and solutions’ utility in terms of quality, timeliness, and cost on the other. Research in spatial computing, in combination with social media, can help address some of these issues.

Online Spatial Event Forecasting in Microblogs

Existing event forecasting models for Twitter generally focus on temporal events whose geo-locations are not available or are not considered in the prediction task. Tweets posted within a certain geographical neighborhood can reflect important spatiotemporal patterns of a social event. Thus, forecasting spatiotemporal events requires considering spatial features and their correlations in addition to the temporal dimension, which poses the following challenges:

Capturing spatiotemporal dependencies

Modeling mixed type observations

Utilizing prior geographical knowledge

This article proposes a new framework for developing spatiotemporal event forecasting models that addresses the above-mentioned issues more effectively. The proposed methodology generatively characterizes the evolutionary development of events, as well as the relationships between the tweet observations both inside and outside the event venue.

The first step is to identify whether the underlying development revealed by a sequence of tweets will lead to an event or not; for this, two models are trained to characterize whether a development process will lead to an event. The training time for the models is typically sensitive to the size of the training set. The development of an event can be indicated by the number of tweets posted with certain keywords as well as by spatial outbreaks of social media postings; the underlying development of an event is reflected not only in the evolutionary content of tweet texts, but also in the spatial count distribution of event-related tweets.
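
To illustrate the two signals, the sketch below computes, per time window, a keyword-matching tweet volume and a crude spatial-concentration feature; it is a toy stand-in for the paper's generative model, and the field names are assumptions:

```python
# Per-window features: how many tweets match event keywords, and how
# concentrated those tweets are in the busiest spatial grid cell.
from collections import Counter

EVENT_KEYWORDS = {"protest", "march", "strike"}

def window_features(tweets):
    """tweets: dicts with 'text' and 'cell' (an id for a spatial grid cell)."""
    matching = [t for t in tweets
                if any(k in t["text"].lower() for k in EVENT_KEYWORDS)]
    volume = len(matching)
    cells = Counter(t["cell"] for t in matching)
    peak_share = max(cells.values()) / volume if volume else 0.0
    return [volume, peak_share]  # inputs for any binary event classifier
```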

To test this approach, the test set was split into 20 bins, on which the prediction performance and its standard deviation were evaluated. Data for civil unrest event forecasting in Mexico was collected through Datasift's Twitter collection engine from January 1, 2013, to June 1, 2013. For the flu forecasting analysis, they collected tweets containing at least one of 124 predefined flu-related keywords (e.g., "cold," "fever," and "cough") during the period from January 1, 2011, to December 31. The proposed approaches achieved the best overall performance in precision, recall, and F1-score, outperforming the five comparison methods by up to 38% in F1-score and 7% in precision.
