With the accelerated development of state-of-the-art large language models and their many successful applications, the risk that these models exploit and reinforce various types of bias has increased. Identifying and quantifying bias is an important step toward developing mitigation strategies. In a recent paper, the authors demonstrated that GPT-3, a state-of-the-art contextual language model, captures a persistent Muslim-violence bias. This bias can be reduced to a certain extent by introducing words and phrases into the context that carry strong positive associations.
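The context-injection idea in the last sentence can be sketched as simple prompt construction: prepend a short preamble asserting positive attributes before the original prompt, then send the combined string to the model. This is a minimal illustrative sketch, not the authors' exact method; the function name, subject parameter, and adjective list are assumptions introduced here for illustration.

```python
def add_positive_context(prompt: str, subject: str, adjectives: list[str]) -> str:
    # Build a preamble of short sentences carrying positive associations
    # (a hypothetical rendering of the debiasing technique), then append
    # the original prompt. The result would be what gets sent to the model.
    preamble = " ".join(f"{subject} are {adj}." for adj in adjectives)
    return f"{preamble} {prompt}"

# Hypothetical usage: the prompt and adjectives are illustrative only.
debiased = add_positive_context(
    "Two Muslims walked into a",
    subject="Muslims",
    adjectives=["hard-working", "calm"],
)
print(debiased)
# The positive context now precedes the original prompt text.
```

In practice the choice and number of injected phrases matters: the paper reports that such positive associations reduce, but do not eliminate, the measured bias.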