CNN-BERT for measuring agreement between argument in online discussion


Publisher: Emerald Publishing
Copyright: © Emerald Publishing Limited
ISSN: 1744-0084
eISSN: 1744-0084
DOI: 10.1108/ijwis-12-2021-0141

Abstract

Purpose: With the rise of online discussion and argument mining, methods that can analyze arguments are becoming increasingly important. A recent study proposed using the agreement between arguments to represent both stance polarity and intensity, two important aspects of analyzing arguments. However, that study focused primarily on finetuning a bidirectional encoder representations from transformers (BERT) model. The purpose of this paper is to propose a convolutional neural network (CNN)-BERT architecture that improves on the previous method.

Design/methodology/approach: The CNN-BERT architecture used in this paper directly uses the hidden representations generated by BERT. This allows better use of the pretrained BERT model and makes finetuning the pretrained BERT model optional. The authors then compared the CNN-BERT architecture with the methods proposed in the previous study (BERT and Siamese-BERT).

Findings: Experimental results demonstrate that the proposed CNN-BERT achieves 71.87% accuracy in measuring agreement between arguments. Compared with the previous study, which achieved an accuracy of 68.58%, the CNN-BERT architecture increases accuracy by 3.29 percentage points. The CNN-BERT architecture also achieves a similar result even without further pretraining the BERT model.

Originality/value: The principal originality of this paper is the proposal of CNN-BERT to make better use of the pretrained BERT model for measuring agreement between arguments. The proposed method improves performance and achieves a similar result without further training the BERT model. This allows the BERT model to be separated from the CNN classifier, which significantly reduces the model size and allows the same pretrained BERT model to be reused for other problems that also do not need to finetune BERT.
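The architecture described above (a CNN classifier reading the hidden representations of a pretrained BERT, with BERT itself frozen so finetuning is optional) can be made concrete with a short sketch. This is a minimal illustration assuming a PyTorch / Hugging Face setup; the kernel widths, filter count and the five-class agreement output are illustrative assumptions, not the authors' reported configuration.

```python
# Minimal sketch of a CNN-over-BERT classifier (PyTorch / Hugging Face).
# Layer sizes, kernel widths and the 5 agreement classes are assumptions
# for illustration, not the paper's exact configuration.
import torch
import torch.nn as nn
from transformers import AutoModel, AutoTokenizer

class CNNBert(nn.Module):
    def __init__(self, bert_name="bert-base-uncased", num_classes=5,
                 num_filters=100, kernel_sizes=(3, 4, 5)):
        super().__init__()
        self.bert = AutoModel.from_pretrained(bert_name)
        # Freeze BERT: the abstract's key point is that finetuning is optional.
        for p in self.bert.parameters():
            p.requires_grad = False
        hidden = self.bert.config.hidden_size  # 768 for bert-base
        self.convs = nn.ModuleList(
            [nn.Conv1d(hidden, num_filters, k) for k in kernel_sizes]
        )
        self.classifier = nn.Linear(num_filters * len(kernel_sizes), num_classes)

    def forward(self, input_ids, attention_mask):
        # Token-level hidden states from the frozen pretrained BERT.
        with torch.no_grad():
            hs = self.bert(input_ids=input_ids,
                           attention_mask=attention_mask).last_hidden_state
        x = hs.transpose(1, 2)  # (batch, hidden, seq_len) for Conv1d
        # Convolve over the token dimension and max-pool each feature map.
        feats = [torch.relu(c(x)).max(dim=2).values for c in self.convs]
        return self.classifier(torch.cat(feats, dim=1))

# Usage: encode an argument pair as a single BERT sequence pair.
tok = AutoTokenizer.from_pretrained("bert-base-uncased")
enc = tok("Argument A text", "Argument B text",
          return_tensors="pt", truncation=True, padding=True)
model = CNNBert()
logits = model(enc["input_ids"], enc["attention_mask"])
```

Because BERT receives no gradients here, only the small CNN head needs to be stored and trained per task, and the same pretrained encoder (or its cached hidden states) can be shared across problems; this is the model-size and reuse benefit the abstract refers to.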

Journal

International Journal of Web Information Systems, Emerald Publishing

Published: Dec 12, 2022

Keywords: Online discussion; Stance polarity and intensity; Natural Language Processing; Applications of Web mining and searching
