The presentation discusses the capabilities and potential risks of GPT-3, a large language model trained on open internet data, in generating deceptive, biased, or abusive language at scale. The tool can be used to spin news stories and persuade people to change their stance on major issues.
- GPT-3 is a language model trained on open internet data that can generate deceptive, biased, or abusive language at scale
- The tool can be used to spin news stories and persuade people to change their stance on major issues
- Because training such a large model requires enormous amounts of data, data quality may have to be sacrificed for quantity
- GPT-3 can generate outputs that are compelling and almost human-like, but not always authentic
- The presentation includes a demo tool called Twatter, a GPT-3-only social media site that generates a chosen number of tweets at a selected temperature (a sampling parameter controlling output randomness)
- The presentation also includes a survey that assesses GPT-3's persuasive power in changing people's stance on major issues
- The survey results show that GPT-3 can generate persuasive arguments for and against major issues, but the authenticity and effectiveness of the arguments vary depending on the issue and the target audience
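The temperature knob mentioned in the Twatter demo controls how random the model's sampling is. As a minimal illustrative sketch (not the presentation's actual code), temperature-scaled sampling over a model's output logits can be implemented like this:

```python
import math
import random

def temperature_softmax(logits, temperature):
    """Convert logits to probabilities after dividing by temperature.

    Lower temperature sharpens the distribution (more predictable text);
    higher temperature flattens it (more surprising, varied text).
    """
    scaled = [l / temperature for l in logits]
    m = max(scaled)  # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scaled]
    total = sum(exps)
    return [e / total for e in exps]

def sample_with_temperature(logits, temperature, rng=random.random):
    """Draw one token index from the temperature-scaled distribution."""
    probs = temperature_softmax(logits, temperature)
    r = rng()
    cumulative = 0.0
    for i, p in enumerate(probs):
        cumulative += p
        if r < cumulative:
            return i
    return len(probs) - 1  # guard against floating-point round-off
```

At low temperature the most likely token dominates (each generated tweet reads similarly), while at high temperature probability mass spreads across more tokens, which is what makes the demo's output range from bland to erratic.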
The presentation includes an anecdote about GPT-3's ability to spin news stories. When given an AP article about the chaos at the Capitol on January 6th, GPT-3 was able to rewrite it in a strongly pro-Trump way, suggesting that the tool can be used to seed fake news stories and support a chosen narrative.