
Evaluating disaster-related tweet credibility using content-based and user-based features


References (42)

Publisher
Emerald Publishing
Copyright
© Emerald Publishing Limited
ISSN
2398-6247
DOI
10.1108/idd-04-2020-0044
Publisher site
See Article on Publisher Site

Abstract

Purpose
This study aims to propose an unsupervised learning model to evaluate the credibility of disaster-related Twitter data and present a performance comparison with commonly used supervised machine learning models.

Design/methodology/approach
First, historical tweets on two recent hurricane events are collected via the Twitter API. Then a credibility scoring system is implemented in which the tweet features are analyzed to assign a credibility score and credibility label to each tweet. After that, supervised machine learning classification is implemented using various classification algorithms, and their performances are compared.

Findings
The proposed unsupervised learning model could enhance emergency response by providing a fast way to determine the credibility of disaster-related tweets. Additionally, the comparison of the supervised classification models reveals that the Random Forest classifier performs significantly better than the SVM and Logistic Regression classifiers in classifying the credibility of disaster-related tweets.

Originality/value
In this paper, an unsupervised 10-point scoring model is proposed to evaluate tweets' credibility based on user-based and content-based features. This technique could be used to evaluate the credibility of disaster-related tweets on future hurricanes and has the potential to enhance emergency response during critical events. The comparative study of different supervised learning methods has revealed effective supervised learning methods for evaluating the credibility of Twitter data.
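The abstract describes a 10-point scoring model over user-based and content-based features but does not list the exact features, weights, or threshold. The sketch below is a minimal illustration under assumed criteria (verified account, follower count, account age, profile description, embedded URL, retweet count, tweet length, hashtag use); every feature name and point value here is a hypothetical placeholder, not the paper's actual rubric.

```python
# Illustrative 10-point credibility scorer: up to 5 points from user-based
# features and up to 5 from content-based features. All criteria and weights
# are assumptions for demonstration only.

def credibility_score(tweet: dict) -> int:
    """Return an integer credibility score from 0 to 10."""
    score = 0
    # User-based features (up to 5 points)
    if tweet.get("user_verified"):
        score += 2
    if tweet.get("followers_count", 0) >= 1000:
        score += 1
    if tweet.get("account_age_days", 0) >= 365:
        score += 1
    if tweet.get("has_profile_description"):
        score += 1
    # Content-based features (up to 5 points)
    if tweet.get("has_url"):
        score += 2
    if tweet.get("retweet_count", 0) >= 10:
        score += 1
    if len(tweet.get("text", "")) >= 50:
        score += 1
    if not tweet.get("has_excessive_hashtags"):
        score += 1
    return score

def credibility_label(score: int, threshold: int = 5) -> str:
    """Map a numeric score to a binary credibility label."""
    return "credible" if score >= threshold else "not credible"
```

Because the scorer needs no labeled training data, it can label tweets as they stream in, which is the speed advantage the authors highlight for emergency response.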
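The supervised comparison in the Findings (Random Forest vs. SVM vs. Logistic Regression) can be sketched with scikit-learn as below. The feature matrix and labels here are synthetic stand-ins; the paper trains on features extracted from real hurricane tweets labeled by the scoring system.

```python
# Sketch of comparing the three classifiers named in the paper via
# cross-validated accuracy. The data below is synthetic placeholder input.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(300, 8))              # placeholder tweet feature vectors
y = (X[:, 0] + X[:, 1] > 0).astype(int)    # placeholder credibility labels

models = {
    "Random Forest": RandomForestClassifier(n_estimators=100, random_state=0),
    "SVM": SVC(kernel="rbf"),
    "Logistic Regression": LogisticRegression(max_iter=1000),
}
for name, model in models.items():
    acc = cross_val_score(model, X, y, cv=5, scoring="accuracy").mean()
    print(f"{name}: mean CV accuracy = {acc:.3f}")
```

On the real labeled tweets, the paper reports Random Forest outperforming the other two; on this synthetic data the ranking may differ.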

Journal

Information Discovery and Delivery (Emerald Publishing)

Published: Jan 20, 2022

Keywords: Emergency response; Twitter; Supervised machine learning; Unsupervised learning; Credibility evaluation; Data annotation
