This week I attended an afternoon conference run by Chris Carter and Sarah Glozer entitled Social Media Research Insights (you can find some of the Twitter commentary at #SMRinsights).
I attended because my research focuses on the social media platforms used by members of the Inflammatory Bowel Disease community. Using user-generated content for purposes not intended by its author makes me uncomfortable, especially when it is used without fully informed consent; in fact, this is one of my motivations for conducting this work in the first place. I don’t agree with scraping social media data and using it out of context, no matter how rich the source. Instead, I’m more interested in understanding people’s experiences of using social media platforms and how they come to trust them.
A pertinent part of Rebecca Whiting’s keynote was when she drew on the work of Schultze and Mason (2012), who suggest that “in light of the increasing entanglement between technology and people in everyday life today, we maintain that revisiting the ethical issues surrounding online research is warranted” (p. 303). In short, ethics shift over time as cultural practices change. With the advent of connected technologies, the ways we communicate have evolved, but our ethics are yet to catch up. We should therefore be mindful of ethics from the beginning of a project and revisit them throughout.
Very relevant insights by @DrRWhiting on ethical issues when publishing social media data based research #SMRinsights pic.twitter.com/QNTpRs6w9l
— Michael Etter (@SoResearchMike) July 25, 2017
The Social Media Ethics Framework draws on three areas:
- Legal – are you keeping within the terms & conditions of platforms?
- Privacy and Risk – is the data personal? Do the benefits of the research outweigh potential or actual harm?
- Re-use and publication – can publication increase the risk of harm, or can participants be re-identified?
Whiting emphasised the highly personal nature of qualitative data. When working with qualitative social media data, it can be extremely difficult to ensure that participants are not re-identifiable. Helena Webb touched upon this in her talk about a project she has been involved with (Digital Wildfire) that addresses hate speech on Twitter. She and her colleagues deliberated over the ethics: can using direct quotes from Twitter cause harm to someone even if their name and Twitter handle are removed? Re-identification is also possible with ‘anonymous’ datasets, as discussed by Zimmer (2010) and Narayanan and Shmatikov (2009).
This then segues into the debate around informed consent (one audience member contended that it has become an ‘obsession’). Logistically, how do you obtain informed consent from a huge number of social media users? If the data is ‘public’, should users assume it is available for other uses? Do users need to give informed consent before data collection? Again, this relates back to Schultze and Mason’s (2012) research, which re-examines internet research as human subjects research. How can we discern whether data is an extension of a human? Referencing White (2002a), they state:
“Not only is the user’s body geographically separated from her digital rendering, but there are also temporal distantiations: a user’s words and actions are stored as digital material and remain accessible even after the originator is deceased. Furthermore, there might be multiple users behind one virtual body (White, 2002a)”
So what does this all mean for my research?
I feel that I have always thought carefully about the ethics of my research. The participants I will work with are considered vulnerable, so I want to do my best to ensure their data is protected (whether they use a pseudonym or not).
- I will not be scraping social media data; the data created by participants will be generated with informed consent and for the purpose of the research.
- I would like to create an advocacy group that will co-design the studies, helping to ensure they are not too risky.
There were other aspects to the conference, but I feel this was the most relevant to my research. I am coming away with plenty of references to work through around ethics, risk and vulnerability.
Narayanan, A. and Shmatikov, V. (2009). De-anonymizing social networks. 30th IEEE Symposium on Security and Privacy.
Schultze, U. and Mason, R. (2012). Studying cyborgs: re-examining internet studies as human subjects research. Journal of Information Technology.
White, M. (2002a). Regulating research: the problem of theorizing community on LambdaMOO. Ethics and Information Technology 5(1): 55–70.
Zimmer, M. (2010). “But the data is already public”: on the ethics of research in Facebook. Ethics and Information Technology.