Veronica Lynn: Sarcasm in Online Debates

Student's Name: Veronica Lynn (lynnv@carleton.edu)
Advisor's Name: Steve Whittaker
Home University: Carleton College
Attachment: lynnv_surfit12_report.pdf (39.65 KB)
Year: 2012

We designed a survey to collect information about sarcasm using posts taken from online forum debates. For each post, we asked participants two questions, one concerning the strength of the argument and the other asking them to identify sarcastic phrases. The survey was conducted in four iterations: the first on Crowdflower.com and the remaining three on Amazon's Mechanical Turk.

The forum posts we used in the survey were selected from the Internet Argument Corpus (IAC), a database of posts scraped from 4forums.com. Posts in the IAC are given in pairs containing a quote and a response to that quote. These quote/response pairs, or QR pairs, are scored on how sarcastic they are, based on a previous study done through Mechanical Turk. The QR pairs are also tagged by topic. We selected a total of 150 QR pairs for use in the surveys: 120 of those were used in the first three iterations of the survey, while the remaining 30 were used in the fourth iteration. The QR pairs we used spanned six topics: 23 were about evolution, 17 about gun control, 20 about climate change, 31 about gay marriage, 39 about abortion, and 20 about the death penalty. The QR pairs were chosen for their high sarcasm scores, indicating that the responses were likely to be perceived as sarcastic. However, the range and distribution of sarcasm scores varied between topics, so what constituted a high sarcasm score depended on the topic.
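The report does not give the selection procedure in code, but a per-topic, score-based selection along the lines of the sketch below would reproduce the counts above. This is only an illustration: the CSV file name and the topic, quote, response, and sarcasm_score column names are assumptions, not the IAC's actual schema.

import csv
from collections import defaultdict

# Hypothetical export of the IAC as a CSV; the real corpus has its own schema.
TARGET_COUNTS = {
    "evolution": 23, "gun control": 17, "climate change": 20,
    "gay marriage": 31, "abortion": 39, "death penalty": 20,
}

def select_qr_pairs(path="iac_qr_pairs.csv"):
    by_topic = defaultdict(list)
    with open(path, newline="") as f:
        for row in csv.DictReader(f):
            if row["topic"] in TARGET_COUNTS:
                by_topic[row["topic"]].append(row)

    selected = []
    for topic, count in TARGET_COUNTS.items():
        # "High" is relative to the topic: rank within each topic and take the
        # most sarcastic pairs rather than applying one global cutoff.
        ranked = sorted(by_topic[topic],
                        key=lambda r: float(r["sarcasm_score"]),
                        reverse=True)
        selected.extend(ranked[:count])
    return selected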

We designed two questions for the survey. The first asked participants to rate the strength of the argument presented in the post. A strong argument was defined as "one which would persuade many people, due to how logically it is structured, how emotionally resonant it is, or both". Respondents chose an integer between 1 and 7, with 1 being "weak" and 7 being "strong". The second question asked participants to identify words and phrases that might contribute to the response being perceived as sarcastic; participants could copy and paste their selections into a text box. We asked these two questions for each of the 150 QR pairs. Each time participants took the survey they saw ten QR pairs, for a total of twenty questions. On Crowdflower, these ten QR pairs were randomly selected, while on Mechanical Turk they were predetermined.
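The difference between how the two platforms presented QR pairs can be illustrated with a minimal sketch. The list below is a stand-in for the 150 selected pairs, and the function names are invented for this example rather than taken from the study.

import random

qr_pairs = [f"qr_{i:03d}" for i in range(150)]  # placeholder IDs, not real data

def crowdflower_session(pool, size=10, seed=None):
    # Crowdflower-style session: ten QR pairs drawn at random from the pool.
    return random.Random(seed).sample(pool, size)

def mturk_sets(pool, size=10):
    # Mechanical Turk-style sessions: fixed, predetermined groups of ten pairs.
    return [pool[i:i + size] for i in range(0, len(pool), size)]

# Each pair contributes two questions (a 1-7 strength rating and a free-text
# list of sarcastic phrases), so a ten-pair session holds twenty questions.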

The first iteration of the survey was posted via Crowdflower.com. Crowdflower provides an interface to design crowdsourcing tasks, but the survey itself is posted to Mechanical Turk. The survey was open to anyone and could be taken up to twelve times, featuring a different set of ten QR pairs each time. Each QR pair could be responded to by up to twenty people. The QR pairs used for this iteration of the survey included 21 on abortion, 21 on evolution, 17 on gun control, 20 on the death penalty, 20 on climate change, and 21 on gay marriage.
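The limits in this iteration follow from simple arithmetic on the counts above; the derived figures in this small check are implied by those counts rather than stated in the report.

pairs = 120           # QR pairs used in this iteration
per_session = 10      # QR pairs shown per survey session
max_respondents = 20  # respondents solicited per QR pair

distinct_sessions = pairs // per_session      # 12, hence "up to twelve times"
max_pair_judgments = pairs * max_respondents  # at most 2,400 pair-level responses
print(distinct_sessions, max_pair_judgments)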

The final three iterations were done directly through Mechanical Turk. Mechanical Turk allows more precise control over who is allowed to take a survey than Crowdflower does, so these iterations were limited to the roughly 650 people who had taken a prequalification survey conducted during a previous study. The prequalification survey asked for basic demographic information, as well as the participants' opinions on various topics, including all of those covered by the QR pairs.

Aside from the prequalification, the first of the three iterations on Mechanical Turk was the same as the iteration on Crowdflower. It used the same set of QR pairs and once again solicited responses from twenty people for each pair. The second iteration was identical but surveyed an additional eighty people per QR pair.