Tuesday, May 16, 2023

Opinionated Language Models Can Influence Users' Views

I decided I needed a different focus for this blog so that I would make the effort to post more frequently. My regular posts can be found at Learning Aloud.

I decided that, until further notice, I would use this blog to explore AI services and would mostly post content generated by AI. I intend to first introduce a topic and then attach the AI-generated content, with a clear distinction between my comments and the AI post. I will have read the material the AI addresses, so I can be reasonably confident the AI content is sound.

This first post is based on my understanding that the corpus used to train AI models is drawn largely from Internet content. The issue for me is that some of this content is biased or outright misinformation, and that material is mixed in with material that is factual. The question, then, is whether those who rely on AI content can be misled, or have their thinking shifted, by their exposure to it.

That question was the focus of an experiment that exposed individuals to AI-generated suggestions that were purposefully slanted and then examined the content they "co-authored" from this material, along with their personal beliefs, to see whether groups exposed to different source material would differ. The citation follows if you want to read the original.

Jakesch, M., Bhat, A., Buschek, D., Zalmanson, L., & Naaman, M. (2023, April). Co-Writing with Opinionated Language Models Affects Users’ Views. In Proceedings of the 2023 CHI Conference on Human Factors in Computing Systems (pp. 1-15).


********************


Introduction


Large language models (LLMs) are becoming increasingly powerful and sophisticated. These models can generate text, translate languages, and answer questions in a comprehensive and informative way. However, there is a growing concern that LLMs could be used to influence users' views.


A recent study by Jakesch et al. (2023) investigated the effects of co-writing with an opinionated language model on users' views. The authors conducted a user study in which participants wrote a short essay with the help of an AI writing assistant that was configured to argue for a particular position. Participants who wrote with the opinionated assistant were more likely to adopt the assistant's views than participants who wrote without that assistance.


Methods


Participants were asked to write a short statement discussing whether social media is good or bad for society. In the treatment conditions, a language-model-powered writing assistant suggested text that argued either for or against social media; participants could use the suggestions as-is or modify them. A control group wrote without writing assistance. After the writing task, participants reported their own views of social media's societal impact in a survey, and a separate sample of human judges rated the opinion expressed in each essay.


Results


The authors found that participants whose assistant argued for a given position were more likely to express that position in their essays than participants in the other conditions. Their responses in the follow-up attitude survey also shifted toward the assistant's position.


Discussion


The authors argue that these findings have implications for the design of LLMs. They suggest that LLMs should be designed to be transparent about their opinions, and that users should be given the opportunity to opt out of co-writing with opinionated LLMs.


Conclusion


The study by Jakesch et al. (2023) provides evidence that opinionated language models can influence users' views. This is a concerning finding, as it suggests that LLMs could be used to spread misinformation or propaganda. It is important to be aware of this potential risk when using LLMs, and to take steps to mitigate it.

Here are some additional thoughts on the implications of this study:

  • LLMs are becoming increasingly popular and are being used in a wide variety of applications. This study suggests we need to be careful about how we use them and aware of their potential to influence our views.
  • We should be critical of the information we encounter online and alert to potential bias, evaluating what we are presented with rather than simply accepting it.
  • Because LLMs could be used to spread misinformation or propaganda, we should also be careful about which sources of information we trust.

[I had personal questions based on my reading of this summary, so I asked a few follow-up questions. I used Bard for the initial summary and ChatPDF for the follow-up.]

Me: What do the treatments require in the way of summarization or paraphrasing on the part of participants?


PDF: The treatments in the study did not require summarization or paraphrasing on the part of participants. Instead, participants were shown suggested text generated by a large language model that argued for or against social media. Participants in the treatment group could choose to use the suggested text as-is or modify it as they saw fit. The researchers compared the essays and opinions of participants who used the language-model-powered writing assistant to a control group that wrote their social media posts without writing assistance.


Me: Say more about how the study measured the opinions of participants.


PDF: The study used a variety of measures to assess the opinions of participants. First, participants were asked to write a short statement discussing whether social media is good or bad for society. Treatment group participants were shown suggested text generated by a large language model that argued for or against social media, while control group participants wrote their social media posts without writing assistance. Second, following the writing task, the researchers asked participants to assess social media's societal impact in a survey. Third, a separate sample of human judges (N=500) evaluated the opinions expressed in participants' writing. The judges were asked to rate each essay on a scale from -3 (strongly against social media) to +3 (strongly in favor of social media). The researchers used these measures to compare the essays and opinions of participants who used the language-model-powered writing assistant to those who did not use it.

