Published on Fri Feb 01 2019

Examining the Presence of Gender Bias in Customer Reviews Using Word Embedding

A. Mishra, H. Mishra, S. Rathee

Reviews play an indispensable role in several business activities ranging from product recommendation to targeted advertising. We question whether reviews might hold stereotypic gender bias. We examine the impact of gender bias in reviews on choice and conclude with policy implications for female consumers.

Abstract

Humans have entered the age of algorithms. Each minute, algorithms shape countless preferences, from suggesting a product to suggesting a potential life partner. In the marketplace, algorithms are trained to learn consumer preferences from customer reviews, because user-generated reviews are considered the voice of customers and a valuable source of information for firms. Insights mined from reviews play an indispensable role in several business activities, ranging from product recommendation and targeted advertising to promotions and segmentation. In this research, we question whether reviews might hold stereotypic gender bias that algorithms learn and propagate. Utilizing data from millions of observations and a word embedding approach, GloVe, we show that algorithms designed to learn from human language output also learn gender bias. We also examine why such biases occur: whether the bias is caused by a negative bias against females or a positive bias for males. We examine the impact of gender bias in reviews on choice and conclude with policy implications for female consumers, especially when they are unaware of the bias, and the ethical implications for firms.
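The paper does not include its code, but the standard way to probe a trained GloVe space for gender associations can be sketched. The snippet below is a minimal, hypothetical illustration: the file path, the he/she base pair, and the attribute words are all assumptions for demonstration, not the paper's actual materials or test sets.

```python
import numpy as np

def load_glove(path):
    """Parse a GloVe text file (word followed by floats) into a dict."""
    vectors = {}
    with open(path, encoding="utf-8") as f:
        for line in f:
            parts = line.rstrip().split(" ")
            vectors[parts[0]] = np.array(parts[1:], dtype=np.float32)
    return vectors

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# Hypothetical file name; any pre-trained GloVe release parses the same way.
vectors = load_glove("glove.reviews.300d.txt")

# A single base pair defines a gender direction in the embedding space.
gender_direction = vectors["he"] - vectors["she"]

# Illustrative attribute words, chosen here for demonstration only.
for word in ["competent", "emotional", "leader", "helpful"]:
    score = cosine(vectors[word], gender_direction)
    # Positive scores lean toward "he", negative toward "she".
    print(f"{word}: {score:+.3f}")
```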

Thu Jul 21 2016
NLP
Man is to Computer Programmer as Woman is to Homemaker? Debiasing Word Embeddings
The blind application of machine learning runs the risk of amplifying biases. Such a danger is facing us with word embedding, a popular framework to represent text data as vectors. We show that even word embeddings trained on Google News articles exhibit female/male gender stereotypes to a disturbing extent.
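The debiasing method this paper proposes ("hard debiasing") removes, for gender-neutral words, the component of each word vector that lies along a learned gender direction. A minimal sketch of that neutralize step, assuming the direction has already been computed as in the sketch above:

```python
import numpy as np

def neutralize(vector, gender_direction):
    """Hard-debiasing 'neutralize' step: remove the component of a
    word vector that lies along the gender direction."""
    g = gender_direction / np.linalg.norm(gender_direction)
    debiased = vector - np.dot(vector, g) * g
    return debiased / np.linalg.norm(debiased)

# After neutralizing, a word such as "programmer" has zero projection
# onto g, so it is equidistant from "he" and "she" along that axis.
```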
Mon Jun 20 2016
Machine Learning
Quantifying and Reducing Stereotypes in Word Embeddings
Machine learning algorithms are optimized to model statistical properties of the training data. We show across multiple datasets that the embeddings contain significant gender stereotypes. We develop an efficient algorithm that reduces gender stereotypes using just a handful of examples.
Tue Oct 06 2020
Artificial Intelligence
Robustness and Reliability of Gender Bias Assessment in Word Embeddings: The Role of Base Pairs
Bias is a complex concept and there exist multiple ways to define it. It has been shown that word embeddings can exhibit gender bias. Various methods have been proposed to quantify this. The extent to which the methods are capturing social stereotypes has been debated.
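A concrete way to see the base-pair issue this paper raises: the same target word receives a different bias score depending on which pair defines the gender direction. A small sketch, assuming the `vectors` dict and `cosine` helper from the earlier example (both illustrative):

```python
import numpy as np

def cosine(a, b):
    return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

# `vectors` is a word -> np.ndarray dict, loaded as in the earlier sketch.
# Each base pair induces its own gender direction, and hence its own
# bias score for the same target word.
pairs = [("he", "she"), ("man", "woman"), ("father", "mother")]
for a, b in pairs:
    direction = vectors[a] - vectors[b]
    score = cosine(vectors["doctor"], direction)
    print(f"({a}, {b}): doctor -> {score:+.3f}")
```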
Sat Jun 16 2018
Artificial Intelligence
Biased Embeddings from Wild Data: Measuring, Understanding and Removing
Many modern Artificial Intelligence (AI) systems make use of data embeddings. These are learnt from data that has been gathered "from the wild" and have been found to contain unwanted biases. In this paper we make three contributions towards measuring, understanding and removing this problem.
Sun Oct 25 2020
NLP
Fair Embedding Engine: A Library for Analyzing and Mitigating Gender Bias in Word Embeddings
Non-contextual word embedding models inherit human-like biases of gender, race and religion from the training corpora. Fair Embedding Engine (FEE) is a library for analysing and mitigating gender bias in word embeddings.
Wed Apr 14 2021
Artificial Intelligence
[RE] Double-Hard Debias: Tailoring Word Embeddings for Gender Bias Mitigation