
Can Recommendations Influence Ideology: News Recommendations Focused On Political Position Differences Among Topics



3 main points
✔️ Proposes news recommendation methods that counter the filter bubble arising from political-position (liberal vs. conservative) bias
✔️ Three models are proposed: STANPP, whose objective function reduces the influence of words that signal an article's political position; MTAN, a multitask learning model whose objective increases the influence of topic-specific words; and MTANPP, which combines the two
✔️ Experiments formulate news recommendation as a binary classification of user preference, built on a large pre-trained language model (BERT)

Reducing Cross-Topic Political Homogenization in Content-Based News Recommendation
written by Karthik Shivaram, Ping Liu, Matthew Shapiro, Mustafa Bilgic, Aron Culotta
(Submitted on Sep 2022)
Comments: RecSys
 

The images used in this article are from the paper, the introductory slides, or were created based on them.

Introduction

The importance of the recommendation function in news applications is increasing day by day. News recommender systems select and rank articles from the vast number published daily, based on factors such as content, user interests, and current events. Only the selected articles are displayed, sparing users from information overload.

However, recommendations that weight user interests too heavily can create filter bubbles. A filter bubble is a phenomenon in which recommendations based on user interests (e.g., search and access logs) show users only the information they want to see, isolating them from information that contradicts their views and sealing their ideas and values inside a "bubble." The term was coined by Pariser in 2011; the phenomenon has since been actively debated in the recommendation field and remains one of the most important topics in news recommendation.

This paper proposes a new approach to filter bubbles in news recommendations, specifically focusing on political positions.

News recommendations focusing on differences in political positions between topics

News recommendations based on user interests cause bias in the recommendation results, but there are many different types of bias, including emotional polarity and article topic. In this study, we paid particular attention to political positions, i.e., liberal or conservative bias.

Such political stances tend to be misinterpreted as uniform across all topics once a person is labeled, e.g., "this person is a conservative" or "that person is a liberal." However, surveys of Americans show that many people hold different political positions depending on the topic, such as "liberal on abortion but conservative on immigration."

The problem here is that news recommendations can bias political positions on newly emerging topics, influenced by past access logs.

For example, suppose a user who is known from past access history to prefer conservative articles on gun control is recommended an article on the new topic of abortion. The access log for gun-control articles implies a "preference for conservative articles," and this preference carries over to the abortion recommendations. If, however, the user actually prefers liberal articles on abortion, the recommendation is not only misguided but also reflects a lack of diversity in political positions.

This bias toward homogenized political positions across topics, caused by news recommendation, is what the authors define as cross-topic homogenization, and this paper aims to address it.

This study proposes Attention-based deep learning models. The first adds an objective function that penalizes words characterizing political positions (liberal or conservative), collected independently by the authors, so that they affect the prediction less. The second weights topic-specific words more heavily. A method combining the two approaches was also tested.

The authors used a dataset of 900,000 articles labeled with political positions, formulating recommendation as the binary classification "does the user prefer this article or not?" They simulate a user with opposite political positions on two topics, e.g., "prefers liberal articles on topic A and conservative articles on topic B."

Related Research on News Recommendation

This section introduces research related to this study.

First, studies addressing bias in news recommendation. This study focuses on a relatively new bias: political-stance bias. However, news recommendation suffers from various other biases as well, such as popularity bias and exposure bias. These biases homogenize the recommended items, leading to phenomena such as filter bubbles and echo chambers. Political-stance bias in recommendation results can contribute to political polarization, dividing public opinion between liberals and conservatives. This concern is especially acute for news recommendation, and various methods have been proposed to diversify its results.

Next, existing methods for news recommendation. While many methods have been proposed, deep learning-based models have been known to perform particularly well in recent years. Many of these are Attention-based: they learn both user and news representations (vectors) from past click logs and predict click-through rates for unseen items. More recently, pre-trained language models such as BERT have been used to improve both user and content representations.

Thus, news recommendation methods and diversity improvements have been actively discussed in recent years. However, no prior work has addressed diversity of political positions across topics, which is the focus of this study.

Problem Setting and Dataset

Problem formulation

In this study, text recommendation is treated as a simple binary classification: for a single user, predict the probability (feedback label) that the user likes each article.

Articles: $a = \{a_1, \ldots, a_n\}$
Feedback labels: $y = \{y_1, \ldots, y_n\}$, where $y_i = 1$ means preferred and $y_i = 0$ means not preferred

The article list $a$ consists of two topics, topic 1 and topic 2, and the feedback labels are assigned as "the user prefers conservative articles on topic 1 and liberal articles on topic 2," simulating a user whose political positions are opposite across the topics.
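This simulated labeling can be sketched in a few lines of Python (the function name and the topic/stance encoding are illustrative, not from the paper):

```python
# Illustrative sketch of the simulated user: prefers conservative
# articles on topic 1 and liberal articles on topic 2.
def feedback_label(topic, stance):
    """Return y = 1 if the simulated user prefers the article, else 0."""
    if topic == 1:
        return 1 if stance == "conservative" else 0
    return 1 if stance == "liberal" else 0

articles = [
    (1, "conservative"),  # topic 1, preferred
    (1, "liberal"),       # topic 1, not preferred
    (2, "liberal"),       # topic 2, preferred
    (2, "conservative"),  # topic 2, not preferred
]
labels = [feedback_label(t, s) for t, s in articles]  # [1, 0, 1, 0]
```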

900K news articles from 41 news sources

The experiment utilizes a dataset of 900,000 news articles from 41 different news sites obtained from Liu et al. These articles are labeled with five levels of political position, $\{-2,-1,0,1,2\}$, where -2 is the most liberal and +2 is the most conservative. This study uses a 100,000-article sample from that dataset.

Dataset construction for the experiments

The 100,000 extracted news articles were labeled for political position but not for topic. Therefore, in this study, topics were extracted by unsupervised clustering using the following procedure:

1. Extract tf-idf features from each news article
2. Cluster the 100,000 articles into 100 classes with the k-means method

From the resulting clusters, only those containing at least 400 articles with an equal number of conservative and liberal articles were kept.
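The two clustering steps and the size filter can be sketched as follows, assuming scikit-learn; the function name is illustrative, and the defaults mirror the paper's 100 clusters and 400-article threshold (the liberal/conservative balance check is omitted here):

```python
# Sketch of the topic-clustering procedure (scikit-learn assumed).
import numpy as np
from sklearn.cluster import KMeans
from sklearn.feature_extraction.text import TfidfVectorizer

def cluster_articles(texts, n_clusters=100, min_size=400):
    # Step 1: tf-idf features
    X = TfidfVectorizer(stop_words="english").fit_transform(texts)
    # Step 2: k-means into n_clusters classes
    labels = KMeans(n_clusters=n_clusters, n_init=10, random_state=0).fit_predict(X)
    # Keep only clusters with at least min_size articles
    sizes = np.bincount(labels, minlength=n_clusters)
    keep = {c for c in range(n_clusters) if sizes[c] >= min_size}
    return labels, keep
```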

As a result, the researchers ultimately obtained 45 different pairs of clusters (90 clusters in total). To confirm that they were correctly classified based on topic, the researchers visually verified that the topics covered included "gun control," "immigration issues," and "healthcare issues."

News recommendation considering political preference bias (proposed method)

Baseline 1: Single Task Network (STN)

As already mentioned, this study casts text recommendation as a binary classification: does a single user prefer a news article or not? The most common approach to text classification in recent years uses pre-trained language models, so as the recommendation baseline we experiment with binary classification using BERT. Below is an overview of the model.

Baseline 2: Single Task Attention Network (STAN)

As another baseline, we also experimented with a model that adds an Attention layer to BERT: the BERT outputs (all token vectors, not only the CLS token) are fed into a linear layer, and its output, normalized with a softmax function, gives the attention weights.

The attention weights are then multiplied by the output vectors from BERT, weighting the words so that informative ones contribute more to the prediction.
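The attention step can be sketched with NumPy (names and shapes are assumptions; in the paper the scoring vector is learned jointly with BERT):

```python
# NumPy sketch of the attention step: token vectors H (tokens x dim)
# from BERT are scored by a learned vector w, softmax-normalized into
# attention weights, and pooled into a single document vector.
import numpy as np

def attention_pool(H, w):
    scores = H @ w                          # one score per token
    weights = np.exp(scores - scores.max())
    weights = weights / weights.sum()       # softmax -> attention weights
    return weights, weights @ H             # weighted sum of token vectors
```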

Proposed Method 1: Single Task Attention Network with Polarization Penalty (STANPP)

From here, we explain the proposed methods. STANPP uses STAN as the deep learning model but adds a loss function that penalizes words likely to affect political stance. This section covers two aspects of STANPP: extracting the stance-indicative words and the loss function.

First, the method for extracting words that may influence political stance. As already mentioned, the dataset is labeled with each article's political position (liberal or conservative). There are various methods for extracting words associated with a particular label; here, the authors extracted the 200 most stance-indicative words using a chi-square test. The following is an example.

The next step is an objective function that penalizes these words. BERT is used to obtain embeddings of the R extracted stance-indicative words; the similarity between these embeddings and the vector output by STAN is then computed and added as a term in the loss function, so that an article's political stance does not drive the prediction.
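A minimal sketch of such a penalty term, assuming mean absolute cosine similarity between the pooled article vector and the stance-word embeddings (the exact form and its weight against the classification loss are assumptions):

```python
# Sketch of a polarization penalty: mean |cosine similarity| between
# the article vector g and the embeddings P (R x dim) of the
# stance-indicative words; a high value means stance words dominate.
import numpy as np

def polarization_penalty(g, P):
    g_n = g / np.linalg.norm(g)
    P_n = P / np.linalg.norm(P, axis=1, keepdims=True)
    return float(np.mean(np.abs(P_n @ g_n)))

# Total loss would then be: classification_loss + lam * polarization_penalty(g, P)
```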

The above is the description of STANPP.

Proposed Method 2: Multitask Attention Network (MTAN)

Next is the other proposed method, MTAN, which aims to increase the influence on predictions of words that determine an article's topic rather than its political stance.

Now, we need to estimate the words that determine each topic, but the dataset used by the researchers has no topic labels, so topic words cannot be extracted the way stance words were. Therefore, the authors apply binary classification with negative sampling, as used in word2vec training, to the task of predicting words in an article's title (headline): for each article $a_i$, predict whether a specific word $h_i$ (extracted from the title and masked) appears in it.

Specifically, the candidate title words $h_i$ are fed into BERT to obtain the representation vector $r_{h_i}$. This is multiplied by the article's attention weights $u_{it}$ to obtain $g_i$, and a linear transformation then predicts whether $h_i$ appears in the title of $a_i$.
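A toy sketch of the auxiliary prediction head (the elementwise interaction and the linear layer parameters W, b are illustrative assumptions; in the paper this head is trained jointly with the recommendation task):

```python
# Sketch of the auxiliary headline-word task: given the article
# vector g_i and a candidate word embedding r_h, score whether the
# word appears in the article's headline via a logistic output.
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def headline_word_prob(g_i, r_h, W, b):
    return float(sigmoid((g_i * r_h) @ W + b))
```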

Proposed Method 3: Multitask Attention Network with Polarization Penalty (MTANPP)

And finally, MTANPP combines STANPP and MTAN: STANPP's penalty term is added to MTAN's objective function.

Experimental Results

Evaluation experiment

From here, we evaluate the models described in the previous section on the constructed dataset. The dataset consists of an article list $a$ and labels $y$ indicating the user's preference for each article. The article list covers two topics, and $y$ is labeled so that the preferred political position differs between the two.

Topics 1 and 2 make up 90% and 10% of the articles, respectively. One purpose of this study is to prevent the political stance learned from topic 1 articles from carrying over when new topic 2 articles appear.

Experiments were conducted on all 45 cluster pairs, and the evaluation results were averaged. In addition to the models just described, the authors also evaluated UNBERT, a method for acquiring textual representations for news recommendation proposed in prior work, for comparison.

Results

Below are the evaluation results.

The evaluation shows that the proposed models (STANPP, MTAN, and MTANPP) tend to achieve accuracy 3% to 6% higher on topic 2 than the baseline STN and STAN models, and 1% to 8% higher on topic 1.

Summary

In this paper, an Attention-based news recommendation method was proposed to prevent homogenization of recommended items with respect to the political positions of news articles. Applied to a dataset simulating users with opposite political positions on two topics, the proposed methods outperformed the STN and STAN baselines.

Future issues include:

  • A user study will be conducted to examine the effectiveness of the proposed methodology.
  • More attention needs to be paid to the explainability of the model in the discussion.
  • Because the data was collected from multiple news sites, there is potential for bias in the labels of the articles.


Filter bubbles are an important issue in the field of recommendation systems. I hope that in the future, recommendation systems that take into account this diversity of political positions will be implemented in the real world.
