
The SoTA Model In The Task Of Detecting Fake News On Social Networking Sites Is Now Available!

Rumor Detection

3 main points
✔️ Proposes a novel graph-based neural network for Rumor Detection
✔️ Learns unique features per view by treating conversation threads as images, nodes as pixels, and multiple views as image channels
✔️ Experimental results on two datasets show that the proposed model outperforms multiple state-of-the-art models

Exploring Graph-aware Multi-View Fusion for Rumor Detection on Social Media
written by Yang Wu, Jing Yang, Xiaojun Zhou, Liming Wang, Zhen Xu
(Submitted on 8 Nov 2022)
Comments: Published on arXiv.

Subjects: Computation and Language (cs.CL)

code:  

The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

Rumor Detection is a task that automatically detects rumors on social networking sites, and has been attracting attention in recent years because it leads to the early detection of fake news and rumors.

Existing research has focused on learning indicative cues from conversation threads (a post on a topic and the replies to it). However, these methods only model features for each individual view of a thread (such as its top-down or bottom-up propagation structure) and cannot effectively combine features from multiple views.

This paper proposes a new framework, GMVCN (Graph-aware Multi-View Convolutional Neural Network), which encodes each view with Graph Convolutional Networks (GCNs) and fuses the consistent information across all views using a Convolutional Neural Network (CNN).

Rumor Detection

Although social networking has become an essential platform for people to obtain and share information, many issues have become problematic, including the spread of rumors.

A rumor is a phenomenon in which (1) correct information, (2) incorrect information (fake news), and (3) information whose authenticity is unknown spreads among people on SNSs. Since SNSs lack effective verification mechanisms for user-generated content, rumors can significantly reduce the credibility of information on these platforms.

It is impossible to manually verify the authenticity of the vast amount of information on social networking services. To address this problem, Rumor Detection, a task that automatically determines the authenticity of the information in a conversation thread, has attracted attention.

The figure below shows the topological structure of the propagation of retweets for a given tweet, with the two on the left being fake news (A false rumor) and the two on the right being correct information (A true rumor).

As the figure shows, fake news tends to propagate more diffusely than genuine information, because it must attract the attention of many people in order to spread quickly.

In terms of textual content as well, fake news tends to attract many questioning and corrective comments, while correct information tends to receive supportive replies from the public.

In this paper, we propose a new framework, GMVCN (Graph-aware Multi-View Convolutional Neural Network), to detect fake news based on differences in the graph structure of such rumors.

GMVCN (Graph-aware Multi-View Convolutional Neural Network)

GMVCN (Graph-aware Multi-View Convolutional Neural Network) consists of three components, as shown in the figure below: (1) Multi-View Embedding, (2) Multi-View Fusing, and (3) Classification.

Multi-View Embedding

A color image can be represented by red, green, and blue (RGB) channels, so each pixel position has three values, one per channel. In computer vision, CNNs efficiently integrate the information across these RGB views for tasks such as image classification.

Taking this as a hint, GMVCN treats a rumor's conversation thread as a color image and each node (post) as a pixel during training.

The top-down and bottom-up views shown above are treated as two channels of the image, and both views share the same set of nodes with the same initial features.

To capture view-specific features, we utilize GCN to encode individual views and update node embeddings according to view-specific structures.
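As a rough illustration of this step, the sketch below (not the authors' code; the module and parameter names are hypothetical, and torch_geometric's GCNConv is assumed as the GCN layer) encodes the shared node features with one GCN per view, obtains the bottom-up view by reversing the edge directions of the top-down view, and stacks the view-specific features as channels.

```python
# Minimal sketch of Multi-View Embedding, assuming PyTorch + torch_geometric.
# All names are illustrative, not taken from the paper's released code.
import torch
import torch.nn as nn
from torch_geometric.nn import GCNConv

class MultiViewEmbedding(nn.Module):
    """Encode the top-down and bottom-up views of a thread with separate GCNs."""
    def __init__(self, in_dim, hid_dim):
        super().__init__()
        self.gcn_td = GCNConv(in_dim, hid_dim)  # top-down view encoder
        self.gcn_bu = GCNConv(in_dim, hid_dim)  # bottom-up view encoder

    def forward(self, x, edge_index_td):
        # Both views share the same initial node features (post embeddings);
        # the bottom-up view is the top-down graph with edge directions reversed.
        edge_index_bu = edge_index_td.flip(0)
        h_td = torch.relu(self.gcn_td(x, edge_index_td))  # view-specific node features
        h_bu = torch.relu(self.gcn_bu(x, edge_index_bu))
        # Stack the two views as "channels" over the shared node set,
        # analogous to the RGB channels of an image: (num_views, num_nodes, hid_dim)
        return torch.stack([h_td, h_bu], dim=0)
```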

Multi-View Fusing

After Multi-View Embedding, a CNN-based sub-module extracts consistent and complementary information from the two views and fuses it into a single vector for prediction.
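A minimal sketch of how such a fusion sub-module could look is shown below, treating the stacked view features as a two-channel "image" and pooling the convolved result into one thread vector (the kernel size, channel count, and pooling choice are assumptions, not the paper's exact configuration).

```python
# Illustrative sketch of Multi-View Fusing; hyperparameters are assumed.
import torch
import torch.nn as nn

class MultiViewFusing(nn.Module):
    """Fuse the stacked view channels into a single thread-level vector with a CNN."""
    def __init__(self, num_views=2, out_channels=32):
        super().__init__()
        # Treat (views x nodes x features) like a small multi-channel image.
        self.conv = nn.Conv2d(num_views, out_channels, kernel_size=(1, 3), padding=(0, 1))
        self.pool = nn.AdaptiveMaxPool2d((1, 1))  # pool over nodes and feature positions

    def forward(self, view_stack):
        # view_stack: (num_views, num_nodes, hid_dim) -> add a batch dimension
        x = view_stack.unsqueeze(0)      # (1, num_views, num_nodes, hid_dim)
        x = torch.relu(self.conv(x))     # convolution mixes information across views
        x = self.pool(x).flatten(1)      # (1, out_channels) thread representation
        return x
```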

Classification

Finally, the model predicts whether the input conversation thread is true or false by feeding the learned thread representation into a fully connected layer followed by a softmax function.
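In PyTorch terms, this classification head could be sketched as a single linear layer followed by softmax (the class count and input size below are assumptions; the actual label set depends on the dataset).

```python
# Hedged sketch of the classification head (sizes are illustrative).
import torch.nn as nn

num_classes = 3   # e.g. true / false / unverified; the real label set depends on the dataset
classifier = nn.Sequential(
    nn.Linear(32, num_classes),  # 32 = size of the fused thread vector from the sketch above
    nn.Softmax(dim=-1),          # probabilities over the rumor classes
)
# In practice, training would typically apply nn.CrossEntropyLoss to the raw logits instead.
```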

Experiments

In this paper, experiments were conducted to compare the performance of GMVCN with several baselines.

Datasets

The following two public datasets were used in this experiment to evaluate the effectiveness of GMVCN:

  • SemEval-2017: A dataset consisting of 325 conversation threads related to 10 different events, divided into training, development, and test sets
  • PHEME: A dataset consisting of 2,402 conversation threads related to 9 different events

Cross-validation was performed on these datasets: in each fold, the conversation threads associated with one event were used for testing and those associated with the remaining events for training.
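The leave-one-event-out protocol described above could be sketched as follows (the data layout is hypothetical; each thread is assumed to carry an event identifier).

```python
# Sketch of leave-one-event-out cross-validation; `threads` is a hypothetical
# list of (thread_data, event_id, label) tuples.
def leave_one_event_out(threads):
    events = sorted({event_id for _, event_id, _ in threads})
    for held_out in events:
        train = [t for t in threads if t[1] != held_out]  # all other events
        test = [t for t in threads if t[1] == held_out]   # the held-out event
        yield held_out, train, test
```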

Baselines

The following eight baseline models were used in this experiment for comparison with GMVCN:

  • BranchLSTM: An LSTM (Long Short-Term Memory) based architecture that models the branches of conversation threads
  • TD-RvNN: An approach that uses a tree-structured recursive neural network to model the top-down propagation structure
  • Hierarchical GCN-RNN: An approach that models the structural and temporal characteristics of threads using a GCN and an RNN, respectively
  • PLAN: A Transformer-based model in which conversation threads are encoded by a randomly initialized Transformer
  • Hierarchical Transformer: An extension of BERT that encodes all interactions in a conversation thread with the Transformer
  • Bi-GCN: A GCN-based model that learns high-level representations from the top-down and bottom-up views of a conversation thread
  • ClaHi-GAT: A GAT-based model that represents conversation threads as undirected graphs
  • EBGCN: A variant of Bi-GCN in which the weights of unreliable relationships are adjusted using Bayesian methods

Macro-F1 and Accuracy are used as evaluation metrics.
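For reference, both metrics can be computed with scikit-learn as follows (the labels shown are made up for illustration).

```python
# Computing Macro-F1 and Accuracy with scikit-learn (illustrative values).
from sklearn.metrics import accuracy_score, f1_score

y_true = [0, 1, 1, 2, 0]   # hypothetical gold labels
y_pred = [0, 1, 0, 2, 0]   # hypothetical model predictions

accuracy = accuracy_score(y_true, y_pred)
macro_f1 = f1_score(y_true, y_pred, average="macro")  # unweighted mean of per-class F1
```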

Results

The experimental results are shown in the table below.

These comparative experiments confirm that GMVCN performs significantly better than all of the baseline models on both datasets.

Summary

How was it? In this article, we introduced a paper proposing GMVCN (Graph-aware Multi-View Convolutional Neural Network), a new framework that encodes multiple views with graph convolutional networks and fuses the consistent information across all views using a convolutional neural network.

Comparative experiments on two real-world datasets confirmed that GMVCN outperforms existing models, and future developments will be closely watched as a countermeasure to the ongoing problem of fake news on social networking sites.

For those interested, the details of the model architecture and the datasets can be found in the paper.
