Welcome! I am Tracy Keys, and you can find me on Instagram @benjibex.
This blog is all about my passion for media, entertainment, fashion, society and data science, and is a showcase for my creations as I develop from data lover to communication data science professional.
I’m just getting this new blog going, so right now it’s mostly the transfer of my academic papers and blogs into one place. Ultimately, I want to express myself with data science and explore society through this medium: data science is also an art and highly creative as well as being analytical. The work to date comes from my journey of exploration and learning, but bringing it to life with my tone of voice will no doubt be a lifelong addiction. Stay tuned for more blog entries. Subscribe below to get notified when I post new updates.
Lilly Irani conducted a multi-year critical study of design schools and entrepreneurship in India, participating as a designer and an ethnographer with one particular design studio for several years. Irani is both a researcher and a tech professional. In her book, Chasing Innovation, she coins the term “entrepreneurial citizen” to explain how civil movements and centralised government planners have handed over the responsibility of social change and nation building to the entrepreneur. Entrepreneurs, and the designers Irani studied, who once had managerial roles responsible for building businesses for investors, are now encouraged to take their passion for social justice and development and channel it into scalable design projects with global market possibilities for growth and profit. Irani points out that proponents of very different models of development still all believe that entrepreneurship and private innovation are the answer to India’s problems.
According to Irani’s research, this change has been brought about through communication that supports the capitalist ideology of the elites and of political and industrial interests. Rather than rising up against these dominant groups, people are persuaded through communication to believe that anyone can be an entrepreneur no matter what their background, and that they can create enterprising ventures, save the world and make money all at the same time. In this way, Irani’s concept of entrepreneurial citizenship is a very subtle tool of hegemony. Instead of political dissent, issues are framed as “opportunities” for “value creation”. If the target users (usually the poor of India) are not interested in the proposed solution because it’s not their number one problem, these concerns are labelled “perceptions” and blithely ignored because they do not align with the investors’ priorities.
Irani explains that entrepreneurial citizenship empowers the middle class, who are well educated, outspoken, can speak English and can convince investors that they add some intangible but essential value. This value add tends to be a theatre of empathy with their potential product adopters, because as noted above, it’s the investors’ priorities that come first, not the users’.
It does not empower the users unless the dominant class also benefit from their adoption of the entrepreneurial development. Irani speaks of how in fact capitalism can leave so many people dispossessed and in poverty, as it moves value up the chain and away from the people who labour.
At the end of the day, investors want a product with global scale, so something that is adopted by millions of Indian people is very attractive. This idea of entrepreneurial citizenship empowers global investors to get access to Indian designers and to the poor of India as potential consumers, so they too benefit.
Governments are still in control of vast public resources and networks, and they ultimately determine which designs are funded. Now, instead of having to explain their decisions to convince the public, they can simply fund innovations and be seen to be supporting development for all peoples. So this concept of the entrepreneurial citizen empowers them too, or at least maintains their position of power.
Hence the idea of the entrepreneurial citizen gives the middle class enough of a carrot to stop them from being effective political opponents to those already in power, and maintains the power of the elites, governments and industry, and keeps the poor in poverty.
In fact, Irani explains that design is essentially hegemony too, because if product design is done well, it is invisible and steers the users to outcomes deemed desirable by the investors without them being aware of it. So the entrepreneurial citizen and their practice of human centred design does not seem likely to be the silver bullet for India’s growing population and poverty.
Reading Wen Wen, Gu and Shea, and a preview of work by Lindtner about Shenzhen, I was asked to reflect on what Chinese maker culture is, and on who or what forces are cultivating it and why.
These works put forward a picture of Chinese maker culture as a hard-working ethos that arose from traditional manufacturing, moved through shanzhai (imitation phone manufacturing in small batches), and has arrived at what Wen calls “0-1 makers” and “makers-to-makers”: individual makers, and makers working together in makerspaces. Lindtner describes shanzhai as an idealized form of grassroots experimentation that operated across a range of scales, from local to vast, and how it is perceived as “hacking with Chinese characteristics”, or the orientalised version of making. It is apparent that Chinese maker culture and identity came about through social engineering, with government, business people and foreign interests cultivating it, as opposed to it being a grassroots movement. Wen ironically calls it “top-down grass-roots innovation”. As a result, what Chinese maker identity means depends on which cultivating force you are considering.
According to Lindtner, Eric Pan, founder of the Shenzhen-based open source hardware company Seeed Studio, is a champion of shanzhai 2.0, which combines the manufacturing efficiency of shanzhai with open source and open innovation. Pan struck a nerve when he situated shanzhai in a long tradition of Chinese ingenuity, aligning it with the philosopher and inventor Mozi. Lindtner argues that Western maker culture is all about fun, social impact, self-realisation, invention, hobbies and tinkering, rather than manufacturing efficiency and hacking out of necessity. Pan’s positioning of shanzhai 2.0 has legitimised shanzhai in the eyes of the makers of the West, because it is aligned with Western values while still providing enough difference and oriental flavour to keep Chinese makers as “other”.
This leads on to why Lindtner argues that Western influencers such as the MIT Media Lab (Fab Labs), Makers and corporations need to see Chinese maker culture as “other”. She describes their disillusionment with the Western techno-utopia, and their fears that the West is a “broken world”, so they look to China as a source of renewed optimism, because of their perception that China is a “temporal other”, stuck in the past and still replete with opportunities for technology to do good. Xin and Shea argue that the West uses Chinese maker culture as a “soft landing” for its brands and ideas, to introduce products to Chinese markets and also to access the Chinese manufacturing supply chain.
Lindtner talks about the free culture movement and Lessig’s belief that shanzhai creative copying is actually piracy. Essentially, unless copying is done in a fun, carefree and profit-free way, it is bad; therefore someone only has a right to hack if they still respect the underlying ideas of IP, ownership and authorship.
But David Li and Huang argue that shanzhai represents China’s right to hack, that IP laws are unethical, and that it is shanzhai that is morally and ethically correct. Chinese makers are like Robin Hood, righting the wrongs of an unfair IP regime imposed by the West.
The Chinese government supports makers through tax breaks, infrastructure, education, R&D support and other means, because encouraging frugal innovation is low risk to them: makers are the “venture labor” described by Gina Neff last week. Their agenda is mass innovation that will continue to refresh the Chinese economy, and that is how they support Chinese maker culture.
Therefore there are many different rationales and perspectives on Chinese maker culture but they are all working towards giving it legitimacy and relevance.
Excerpts from books by Indergaard and Neff describe Silicon Alley: a new media hub located in downtown New York City from the early 1990s with ambitions to become a regional innovation hub like Silicon Valley in California. Silicon Alley grew rapidly during the dot-com boom but failed to withstand the dot-com crash at the end of the nineties. A number of start-ups focused on the New York media and advertising industries were successful, such as Razorfish and DoubleClick, but the majority of start-ups could not convert their creativity into commercial corporate value, and many of their clients used them as contract labor and eventually built their own in-house capability.
So why was Silicon Alley unable to thrive as Silicon Valley did? From this week’s readings I can find three possible causes: homophilous networks, lack of resilience to economic downturn, and hype rather than real industry.
The kids of Silicon Alley were young, hip and edgy. Their version of networking was clubbing and partying; they even called their community “The Scene”. Their networks were dense and homophilous, because being able to network till late created strong friendships but at the same time created exclusive cliques, as it was not something people who lived in the suburbs, older people or those with family responsibilities could do. But regional innovation hubs need diverse, heterogeneous networks to build bridging social capital that can rapidly diffuse technological change, spark innovative ideas and foster partnerships and collaboration. This is very different from Silicon Valley, which was a tech hub with businesses from all industries and employees from all over the world, which made it extremely diverse, as explained by English-Lueck in our week 4 readings.
Not to mention that all that partying does not make you resilient. Schrock and Neff describe how the lines between work and personal time are blurred by the idea of Silicon Valley, which can cause people to burn out. People were also not geographically mobile, because who you knew was so important in Silicon Alley, so it wasn’t easy to import or export people from the Alley. In addition, Silicon Alley’s reliance on the media and advertising industry as clients of its creative output, and its lack of a manufacturing base, made it vulnerable to economic downturns. The creatives of Silicon Alley were initially able to “rise with the tide” of the dot-com boom, but the bust really hurt Silicon Alley, whereas Silicon Valley was more resilient.
Lastly, Silicon Alley was a victim of its own hype. As Dr Schrock states, Silicon Valley was a promise, not a place. Silicon Valley, situated in the Santa Clara Valley in California, was absolutely the site of an incredibly productive and profitable regional innovation hub. In “Silicon Valley’s old money”, O’Mara also describes the long heritage of successful people mentoring the next generation of innovators, who learn directly from their mentors and in turn become very successful. But the export of Silicon Valley was a promise. When, as Indergaard describes, the wealthy real estate investor Rubin began wiring vacant buildings in the financial district and offering them to new media start-ups for low rents and short leases, he was banking on this promise. The area was branded Silicon Alley to strengthen that promise, and further hype was created by the establishment of associations and newsletters. But at its core, the creativity, edginess and non-conformity of the Silicon Alley start-ups was difficult to convert into business leverage. Established businesses doubted their business acumen, given so much publicity around partying. Therefore, a strong business model was never realized, and Silicon Alley never really recovered after the dot-com bust.
This research paper explores the psychological and social motivations for using the immensely popular social media/live streaming platform Twitch by conducting a small survey of users.
Keywords: Live Streaming, Twitch, Social Identity Theory, Uses and Gratifications Theory, Psychological motivations
Psychological Motivations for Twitch users
This research paper explores the psychological and social motivations for using the immensely popular social media/live streaming platform Twitch. This small survey of 87 participants could not establish a relationship between psychological motivations (information seeking, entertainment or social) and continuous watching intention. However, it did support the hypotheses, and align with past research, that users whose online social identities are aligned with the broadcaster use the platform for information seeking and entertainment, while those who align their identities with groups of other audience members use Twitch for social motivations. A novel finding of this research was that whether participants identified with the group or with the broadcaster, they all experience para-social feelings towards the broadcaster, feel a sense of community with other members, are driven by conformity motivations in their use of the platform, and share a co-experience of Twitch with other users in their group.
Twitch Live Streaming Platform
Twitch.tv is a live streaming platform where broadcasters stream content on their channel, mostly streams of themselves playing video games (Ewalt, 2013). Twitch is also home to official broadcasts of esports tournaments and, more recently, to broadcasters of “real life” content (Wikipedia, 2019). Viewers can subscribe to broadcasters’ channels to watch their live streams, and interact with other viewers and the broadcaster (when they read the messages) via stream chat. Some live streams have over 20,000 concurrent viewers, and the chat messages can stream past at what appears an unintelligible speed to a novice. Twitch audiences have their own ways of playing around with chat in extra-large live streams (greater than 10,000 concurrent viewers), such as using ASCII and copypasta art, in a style called crowdspeak (Ford, Gardner, Horgan, & Liu, 2017). Twitch chat is “simultaneously incoherent and enjoyable” (Ford et al., 2017, p. 5). Through combining broadcast and the somewhat incoherent chat, Twitch is a new and unique form of participatory social media (Hu, Zhang, & Wang, 2017; Jenkins, 2006).
Since being spun off from its parent site, justin.tv, in 2011 (Ewalt, 2013; Ford et al., 2017), Twitch.tv’s popularity has continued to grow astronomically. According to Twitch’s own website, they have upwards of 1.3m concurrent viewers at any given moment, over 3m creators streaming monthly, and more than 15m average daily visitors (Twitch, 2019). Half a trillion minutes were streamed in 2018 (Twitch, 2019), and in 2014 Twitch was the 4th largest streaming site in the US (Ford et al., 2017; Hilvert-Bruce, Neill, Sjöblom, & Hamari, 2018). Clearly, Twitch is meeting a very prevalent need in society.
All of this is very bewildering to new users, or to students of communication media who are unfamiliar with the platform. It prompts the question: why do people consume different types of media? (Hilvert-Bruce et al., 2018). The purpose of this paper is to hypothesize why people use Twitch and to test those hypotheses.
Literature Review and hypothesis development
The theoretical background for this research paper is grounded in two theories related to computer mediated communication: uses and gratification theory and social identity theory. The relationships between the theory, concepts and hypotheses are illustrated in Figure 1.
Social identity theory and social identification concept
In their study of intergroup conflict, Tajfel and Turner proposed social identity theory, under which people hold multiple social identities along with their individual one (Tajfel & Turner, 1979). These social identities form where we experience a sense of oneness and belonging to a community, and this can happen online (Hu et al., 2017; Xiao, Li, Cao, & Tang, 2012). Individuals seek to create online social identities (even when otherwise anonymous), and these identities help foster trust, information exchange and social exchange between community members (Postmes, Spears, & Lea, 1998; Walther, 1996; Xiao et al., 2012). This social identification concept and the forming of online social identities lead to continuous use intention (Chang & Zhu, 2011; Hu et al., 2017). For owners of online sites such as Twitch, continuous use intention is a key objective.
Uses and Gratification Theory
Uses and Gratification Theory (UGT) attempts to answer the question of why people choose to consume different types of media (Hilvert-Bruce et al., 2018). According to UGT, media engagement behaviors are aimed at “the fulfilment of individual psychological needs” (Hilvert-Bruce et al., 2018, p. 59).
Motivators of media engagement behavior include information seeking, entertainment, and social motivations such as meeting new people, social interactions and support, sense of community, social anxiety and external support. However, research has found that social anxiety and external support are not supported as uses for Twitch (Hilvert-Bruce et al., 2018).
Information Seeking and Entertainment
A number of papers identify information seeking and knowledge exchange/sharing as a reason for use of social media platforms and online forums (Chiu, Hsu, & Wang, 2006; Ford et al., 2017; Hilvert-Bruce et al., 2018; Pendry & Salvatore, 2015; Xiao et al., 2012). Entertainment is also a psychological motivator in the use of social networking sites (Chang & Zhu, 2011).
Information seeking and entertainment are important motivators for using Twitch, because audiences can learn how to play games while enjoying watching the most experienced players in the world, either during tournaments or on their live stream channel (Ewalt, 2013; Hilvert-Bruce et al., 2018).
H1.1 Use of Twitch for information seeking and entertainment motivations is positively correlated with continuous watching intention.
Meeting new people, social interactions and sense of community are noted in research as important psychological reasons for using social networking sites (Chang & Zhu, 2011) and live streaming sites (Hilvert-Bruce et al., 2018). A sense of community online involves an individual experiencing feelings of belonging, having a say, fulfilment of needs, feeling a bond with others, and mutual influence between members (Hilvert-Bruce et al., 2018; Mcmillan & Chavis, 1986; Peterson, Speer, & Mcmillan, 2008). Online social ties form between members’ online social identities from the social interactions and sense of community they have, and further reinforce online social identity and social identification concept outlined in the previous section (Hilvert-Bruce et al., 2018; Xiao et al., 2012).
H1.2 Use of Twitch for social motivations is positively correlated with continuous watching intention.
Types and Antecedents of Social Identification
Further to social identity theory, social identification concept and UGT, research identifies two types of social identification for users of live streaming platforms: broadcaster identification and group identification (Choe, 2019; Hu et al., 2017).
Identification with the broadcaster is motivated by individual identification in the classical sense: wanting to be like someone you admire (Hu et al., 2017). Broadcaster identification on live streaming platforms like Twitch arises through the effects of para-social activity, where the audience has the illusion of an individual relationship with the broadcaster, facilitated by the stream chat and the responses of the broadcaster to individuals’ requests (through techniques like footing and recruitment, as explained by Choe, 2019, and Hu et al., 2017).
H3.1 Para-social experience is positively correlated with Broadcaster Identification on Twitch.
This paper hypothesizes that, along with para-social experience, audiences follow certain broadcasters because they want to learn how they play video games (information) or because they enjoy their streams (entertainment).
H2.1 Use of Twitch for information seeking and entertainment motivations is positively correlated with Broadcaster Identification.
Identification with a group is the sense of community (belongingness and oneness) generated through online social ties and social interactions that occur between online social identities (Hilvert-Bruce et al., 2018; Hu et al., 2017). Group identification occurs through the social interaction with other audience members facilitated through stream chat and also offline (Choe, 2019; Hu et al., 2017) and is caused by the social effects outlined above in UGT section. It can be measured in terms of co-experience, where interaction between members co-creates the community, through cognitive communion, resonant contagion and sense of community (Hilvert-Bruce et al., 2018; Hu et al., 2017).
Of course, communities form in real life (offline) too, and these groups can influence the social interaction and sense of community occurring online and also be influenced by it (Jenkins, 2006). Social-media-enhanced real-time streaming video sites can reduce the physical distance between friends (Lim, Cha, Park, Lee, & Kim, 2012), they can encourage civic activity offline (Pendry & Salvatore, 2015), and they can generate a virtuous feedback cycle in participatory media (Jenkins, 2006). Chang and Zhu discuss how having a critical mass of friends on social media sites can encourage others to join them (Chang & Zhu, 2011); this conformity motivation could therefore be another psychological motivation for use of new social networking/live streaming services like Twitch.
H2.2 Use of Twitch for social motivations is positively correlated with Group Identification.
H3.2 Conformity Motivation, Sense of Community and Co-experience are positively correlated with Group Identification.
A total of 87 participants responded to the survey, which was administered over two 24-hour periods on the Amazon MTurk platform. The first batch yielded 52 responses and the second batch 37. The second batch was run in order to have a large enough sample for regression analysis (although this wasn’t successful). Survey participation was voluntary, and participants each received between USD$0.70 and $0.85 compensation via the MTurk platform.
80% of the survey participants identified as male and the remaining 20% as female. 72% of participants were 25–34 years old, 14% were 35–44, 9% were 18–24 and 5% were 45–54. The participants were located in either the USA or India, as shown in Figure 2 below; the size of each circle indicates the number of responses from that location.
Figure 2 Location of participants
The survey consists of 38 questions obtained from past studies outlined in the Literature Review section, whose responses were examined for pairwise positive correlation to test the hypotheses. See Table 1 for a summary of the areas of the survey and the number of questions.
All responses are measured using a 7-point Likert Scale: Strongly Disagree, Disagree, Somewhat Disagree, Neither agree nor disagree, Somewhat Agree, Agree, Strongly Agree.
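For analysis, these labels map onto an ordinal 1–7 coding. A minimal sketch of that mapping (the function and variable names are illustrative, not from the survey instrument):

```python
# Map the 7-point Likert labels used in the survey to ordinal scores 1..7.
LIKERT_SCORES = {
    "Strongly Disagree": 1,
    "Disagree": 2,
    "Somewhat Disagree": 3,
    "Neither agree nor disagree": 4,
    "Somewhat Agree": 5,
    "Agree": 6,
    "Strongly Agree": 7,
}

def encode_responses(responses):
    """Convert a list of Likert labels into numeric scores for correlation analysis."""
    return [LIKERT_SCORES[label] for label in responses]
```

Encoding the labels numerically is what makes the pairwise correlation analysis described below possible.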
This paper measures engagement with Twitch using continuous watching intention (Chang & Zhu, 2011; Hu et al., 2017; Kang, Hong, & Lee, 2009). Although there are other measures, such as self-reported frequency and psychological and financial measures, intention was chosen as a good balance: easy to measure yet less subjective (Hilvert-Bruce et al., 2018).
Information seeking and entertainment motivation measures
Three questions explore information seeking motivations and two questions measure entertainment motivation (Chang & Zhu, 2011; Hilvert-Bruce et al., 2018) for Hypotheses H1.1 and H2.1.
Social motivation measures
Social motivations, with reference to Hypotheses H1.2 and H2.2, are measured by multiple questions: one question to determine whether Twitch is used for meeting new people (Chang & Zhu, 2011; Hilvert-Bruce et al., 2018); six questions that explore the nature of participants’ online social identities, i.e. whether they know others’, or others know their, screen name, real name or personality (Postmes, Spears, & Lea, 1998; Walther, 1996; Xiao, Li, Cao, & Tang, 2012); and one question to measure online social ties, asking about frequency of communication with other audience members (Xiao et al., 2012).
Group Identification and Broadcaster Identification measures
For Hypotheses H2.1, H2.2, H3.1 and H3.2, group identification is measured by two questions covering two different types of group identification: identification with other audience members, and feeling like being in a club with other fans of the broadcaster (Hu et al., 2017; Yoshida, Heere, & Gordon, 2015). Broadcaster identification is measured by four questions about whether people use Twitch to follow a broadcaster and whether they see them as a model to follow, align with their values or are proud to follow them (Hu et al., 2017; Liu, Liao, & Wei, 2015; Shamir, Zakay, Breinin, & Popper, 1998).
For Hypotheses H3.1 and H3.2, para-social experience is measured through three questions regarding recruitment and reactions between the individual and the broadcaster (Hartmann & Goldhoorn, 2011; Hu et al., 2017). Conformity motivation is measured by two questions asking whether the people the respondent communicates with also watch Twitch (Chang & Zhu, 2011). Co-experience is measured by one question regarding cognitive communion (sharing thoughts with other members) and two questions regarding resonant contagion (mutual influence on the behavior of the audience) (Hu et al., 2017; Lim et al., 2012). Sense of community is measured using five questions to determine belongingness, needs fulfilment and other indicators (Hilvert-Bruce et al., 2018; Mcmillan & Chavis, 1986; Peterson, Speer, & Mcmillan, 2008).
Given the results are analyzed as correlations of pairwise relationships, support is calculated as the proportion of correlated relationships over total relationships. A hypothesis is supported if the support is greater than 75%.
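That support calculation can be sketched as follows (a minimal illustration; the function name is mine, and the 75% threshold is the one used in the discussion of the results):

```python
def is_supported(correlated: int, total: int, threshold: float = 0.75) -> bool:
    """A hypothesis is supported when the proportion of positively
    correlated pairwise relationships exceeds the threshold."""
    return correlated / total > threshold

# For example, 11 of 12 correlated relationships clears the threshold,
# while 3 of 6 does not.
```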
Question responses were compared pairwise in this study. There were 38 questions, and the results of positive correlations are shown in Tables 2 and 3; there were no negative correlations.
The results for each pairwise comparison are noted in Tables 2 and 3 below, for each hypothesis. The results of the hypotheses involving group identification (H2.2 and H3.2) were split in two because the results differed between the two group identification questions, whereas for broadcaster identification the results were aligned across all three measurement questions.
The strength of the pairwise correlations was measured and reported by Qualtrics using the p-value and effect size, at a 95% confidence level. A correlation is anything with a p-value of 0.05 or less. Qualtrics denotes a relationship as subtly positively correlated if it has a p-value between 0.05 and approximately 0.01, positively correlated between 0.01 and 0.00001, and strongly positively correlated below 0.00001. In the tables below, next to each question, there is the text of the question plus the quantities of each type of correlated relationship: a subtly positively correlated result is denoted SPC, a strong correlation STRONG, and positive correlations are either not noted or noted PC. The total of correlated relationships over total relationships is also given in each cell of the matrix (in brackets and italics) to summarize the overall result; these results are described below.
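The labelling rule just described can be expressed as a small helper (a sketch of the bands as stated above; the function name and the "none" label are mine):

```python
def label_correlation(p_value: float) -> str:
    """Classify a pairwise correlation by its p-value, following the
    bands reported by Qualtrics in this study."""
    if p_value < 0.00001:
        return "STRONG"  # strongly positively correlated
    elif p_value <= 0.01:
        return "PC"      # positively correlated
    elif p_value <= 0.05:
        return "SPC"     # subtly positively correlated
    else:
        return "none"    # not a statistically significant correlation
```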
Table 2 shows the results for H1.1, H1.2, H2.1 and H2.2, and Table 3 shows the results for H3.1 and H3.2.
For H1.1, Table 2 shows that 3 of 6 measures of information seeking and entertainment motivation were positively correlated with continuous watching intention.
For H1.2, only 1 of a total of 8 responses was positively correlated with continuous watching intention.
For H2.1, 11 of a total of 12 responses for information seeking and entertainment motivations were positively correlated with Broadcaster Identification. The research also reviewed the correlations between social motivations and Broadcaster Identification, and 16 of 24 responses were positively correlated.
For H2.2, where the response was feeling like a group of fans of the broadcaster, 7 of 8 responses for measures of social motivations were positively correlated with Group Identification. In addition, 3 of 6 measures of information seeking and entertainment motivation were positively correlated with Group Identification, a different relationship to that posited by H2.2.
For H2.2, where the response was identifying with the broadcaster’s followers, 7 of 8 responses for measures of social motivations were positively correlated with Group Identification. In contrast to the fan-club group identification, only 1 of 6 responses for information seeking and entertainment motivation was positively correlated with Group Identification, so no relationship beyond that posited by H2.2 is suggested for this group.
The results for Hypotheses H3.1 and H3.2 are shown in Table 3, again with H3.2 split for Fans of the Broadcaster and Identifying with other followers.
For H3.1, the antecedent question responses for Broadcaster identification were correlated for para-social experience in 7 of 9 relationships. In addition, sense of community (13/15), conformity motivation (6/6) and co-experience (8/9) relationships were also positively correlated.
For H3.2 for the Club of Fans, there was positive correlation across all relationships: para-social experience (3/3), sense of community (5/5), conformity motivation (2/2) and co-experience (3/3).
For H3.2 for the identifying-with-other-followers group identification, there was positive correlation across almost all relationships: para-social experience (3/3), sense of community (4/5), conformity motivation (2/2) and co-experience (3/3).
Table 3 Antecedents of broadcaster and group identification (positively correlated relationships / total relationships)

                                        Broadcaster   Group: Club of Fans   Group: identify with followers
Brief sense of community                   13/15              5/5                     4/5
Cognitive communion                         2/3               1/1                     1/1
Conformity motivation                       6/6               2/2                     2/2
Experience of para-social interaction       7/9               3/3                     3/3
Resonant contagion                          6/6               2/2                     2/2

Measurement questions: sense of community (“I belong in my most watched Twitch channel”, “I have a good bond with others in my most watched Twitch channel”, “People in this Twitch channel are good at influencing each other”, “My most watched Twitch channel helps me fulfill my needs”, “I have a say about what goes on in my most watched Twitch channel”); cognitive communion (“I felt I shared similar thoughts with other audience members”); conformity motivation (“Many people I communicate with watch Twitch.tv”, “Of the people I communicate with regularly, many watch Twitch.tv”); para-social interaction (“While I was watching, the broadcaster knew that I reacted to them”, “While I was watching, the broadcaster reacted to what I said or did”, “While I was watching, the broadcaster knew I paid attention to them”); resonant contagion (“My behavior influenced others in this audience group of my most watched Twitch channel”, “My behavior was influenced by others in this audience group”).
The threshold for support is 75% of relationships being correlated.
As support for H1.1 (use of Twitch for information seeking and entertainment motivations is positively correlated with intention of continued engagement) was 50%, this hypothesis is not supported.
Support for H1.2 (use of Twitch for social motivations is positively correlated with intention of continued engagement) was 12.5% (1 in 8), so this hypothesis is not supported.
Support for H2.1 Use of Twitch for information seeking and entertainment motivations is positively correlated with Broadcaster Identification is 92% (11 in 12), so this hypothesis is supported.
Support for H2.2 Use of Twitch for social motivations is positively correlated with Group Identification where participants felt like a group of fans of the broadcaster was 88% (7 in 8), so this hypothesis is supported. The results for this group using Twitch for information seeking and entertainment motivations were 50% (3 in 6) so do not meet the threshold.
Support for H2.2 Use of Twitch for social motivations is positively correlated with Group Identification where participants identified with other followers of the broadcaster was 88% (7 in 8), so this hypothesis is supported. The results for this group using Twitch for information seeking and entertainment motivations were 16% (1 in 6) so do not meet the threshold.
Support for H3.1 Para-social experience is positively correlated to Broadcaster Identification through Twitch was 78% (7 in 9), so this hypothesis is supported.
Support for H3.2 Conformity Motivation, Sense of Community and Co-experience is positively correlated to Group Identification where participants felt like a group of fans of the broadcaster was 100% (13 in 13), so this hypothesis is supported.
Support for H3.2 Conformity Motivation, Sense of Community and Co-experience is positively correlated to Group Identification where participants identified with other followers of the broadcaster was 92% (12 in 13), so this hypothesis is supported.
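The support calculation behind these verdicts is simple arithmetic; a minimal sketch using a few of the counts reported above:

```python
# Correlated-relationship counts as reported above: (correlated, total).
results = {
    "H1.2 social -> continuation": (1, 8),
    "H2.1 info/entertainment -> broadcaster ID": (11, 12),
    "H3.2 -> group ID (Club of Fans)": (13, 13),
}

THRESHOLD = 0.75  # a hypothesis needs 75% of relationships correlated

def is_supported(correlated, total, threshold=THRESHOLD):
    """A hypothesis is supported when the share of positively
    correlated relationships meets the threshold."""
    return correlated / total >= threshold

for name, (c, t) in results.items():
    verdict = "supported" if is_supported(c, t) else "not supported"
    print(f"{name}: {c}/{t} = {c / t:.0%} -> {verdict}")
```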
Unlike other research, this survey could not find a statistically significant relationship with watching continuation intention for information seeking, entertainment or social motivations.
As hypothesized in this research, people who identify with Broadcasters use Twitch for entertainment and information seeking purposes, and those who identify with Groups use Twitch to gratify social motivations.
Unlike Hu et al. (2017), this research found that users who identify with groups and users who identify with the broadcaster all experience para-social feelings towards the broadcaster and a sense of community with other members, are driven by conformity motivations in their use of the platform, and share a co-experience of Twitch with other users in their group.
This was a very small sample, so the results are exploratory rather than able to be generalized to a wider population, but it is clear that the respondents of this survey who are users of Twitch have strong social reasons for being on the platform.
Limitations and future directions
The analysis of the contribution of each survey question to the underlying phenomena being measured was very rudimentary, i.e. based on the proportion of correlated relationships to total relationships. This was due to the limitations of Qualtrics and of the researcher's familiarity with that system. Regression analysis was attempted for correlated variables; however, its explanatory power was very limited because the sample size was so small. In addition, Qualtrics cannot conduct confirmatory factor analysis, so this research relied upon the relationships established by the research papers from which the survey questions were drawn (see Table 1). However, many of these relationships may not hold here, as this is a different sample and some questions were removed from this survey.
Further analysis could be conducted using a more advanced statistical software package and a larger sample size to overcome these issues.
This small survey of 87 participants could not establish a relationship between psychological motivations (information seeking, entertainment or social) and watching continuance intention. However, it did support the hypothesis, and align with past research, that users whose online social identities align with the broadcaster use the platform for information seeking and entertainment, while those who align their identities with groups of other audience members use Twitch for social motivations. A novel finding of this research was that whether people identified with groups or with broadcasters, they all experience para-social feelings towards the broadcaster and a sense of community with other members, are driven by conformity motivations in their use of the platform, and share a co-experience of Twitch with other users in their group.
Chang, Y. P., & Zhu, D. H. (2011). Understanding social networking sites adoption in China: A comparison of pre-adoption and post-adoption. Computers in Human Behavior, 27(5), 1840–1848. https://doi.org/10.1016/j.chb.2011.04.006
Chiu, C.-M., Hsu, M.-H., & Wang, E. T. G. (2006). Understanding knowledge sharing in virtual communities: An integration of social capital and social cognitive theories. Decision Support Systems, 42(3), 1872–1888. https://doi.org/10.1016/j.dss.2006.04.001
Hilvert-Bruce, Z., Neill, J. T., Sjöblom, M., & Hamari, J. (2018). Social motivations of live-streaming viewer engagement on Twitch. Computers in Human Behavior, 84, 58–67. https://doi.org/10.1016/j.chb.2018.02.013
Hu, M., Zhang, M., & Wang, Y. (2017). Why do audiences choose to keep watching on live video streaming platforms? An explanation of dual identification framework. Computers in Human Behavior, 75, 594–606. https://doi.org/10.1016/j.chb.2017.06.006
Jenkins, H. (2006). Fans, bloggers, and gamers exploring participatory culture. New York: New York University Press.
Kang, Y. S., Hong, S., & Lee, H. (2009). Exploring continued online service usage behavior: The roles of self-image congruity and regret. Computers in Human Behavior, 25(1), 111–122. https://doi.org/10.1016/j.chb.2008.07.009
Lim, S., Cha, S. Y., Park, C., Lee, I., & Kim, J. (2012). Getting closer and experiencing together: Antecedents and consequences of psychological distance in social media-enhanced real-time streaming video. Computers in Human Behavior, 28(4), 1365–1378. https://doi.org/10.1016/j.chb.2012.02.022
Liu, S., Liao, J., & Wei, H. (2015). Authentic Leadership and Whistleblowing: Mediating Roles of Psychological Safety and Personal Identification. Journal of Business Ethics, 131(1), 107–119. https://doi.org/10.1007/s10551-014-2271-z
Yoshida, M., Heere, B., & Gordon, B. (2015). Predicting behavioral loyalty through community: Why other fans are more important than our own intentions, our satisfaction, and the team itself. Journal of Sport Management, 29(3), 318–333. https://doi.org/10.1123/jsm.2013-0306
McMillan, D. W., & Chavis, D. M. (1986). Sense of community: A definition and theory. Journal of Community Psychology, 14(1), 6–23.
Peterson, N. A., Speer, P. W., & McMillan, D. W. (2008). Validation of a Brief Sense of Community Scale: Confirmation of the principal theory of sense of community. Journal of Community Psychology, 36(1), 61–73. https://doi.org/10.1002/jcop.20217
Postmes, T., Spears, R., & Lea, M. (1998). Breaching or Building Social Boundaries?: SIDE-Effects of Computer-Mediated Communication. Communication Research, 25(6), 689–715. https://doi.org/10.1177/009365098025006006
Shamir, B., Zakay, E., Breinin, E., & Popper, M. (1998). Correlates of Charismatic Leader Behavior in Military Units: Subordinates’ Attitudes, Unit Characteristics, and Superiors’ Appraisals of Leader Performance. Academy of Management Journal, 41(4), 387. https://doi.org/10.2307/257080
Tajfel, H., & Turner, J. C. (1979). An integrative theory of intergroup conflict. In W. G. Austin & S. Worchel (Eds.), The social psychology of intergroup relations (pp. 33–47). Monterey, CA: Brooks/Cole.
Xiao, H., Li, W., Cao, X., & Tang, Z. (2012). The Online Social Networks on Knowledge Exchange: Online Social Identity, Social Tie and Culture Orientation. Journal of Global Information Technology Management, 15(2), 4–24. https://doi.org/10.1080/1097198X.2012.11082753
Here is the link to my presentation from January 2020. It is about ten minutes long and is a pre-viz, to help convey what the world would be like if my product were live. I presented it live, TED Talk style, but this is the narrated version. The capability would take some years to develop but is well worth pursuing.
Playing around with Keras to create a fashion image classifier
Those who shop for fashion online know the frustration of searching and trawling through multiple sites looking for something in particular, and when you finally do find it, it’s out of stock in your size, and you must start all over again.
I dream of one day selling my search plug-in to Google to find and curate clothing from online sites that are in stock, are the right size, are in my budget, and all the other factors that I’m searching for.
To enable this, I would build a tool that takes search terms, and/or an image or a description of the item, and searches the web for me.
This project is a prototype to see how one would go about doing this, and whether machine learning makes it at all feasible.
Focusing on the image recognition aspect of the problem, I have built my own fashion data set from searching the internet and built and tuned machine learning models (convolutional neural networks (CNNs)) to see which works best for finding the images I am searching for.
My proposal comes from a desire to solve a personal pain point, as I am a prolific online shopper. I have recently been encouraged by Google's own product development for Google Search. Whenever people perform a kind of search regularly, Google eventually brings out a dedicated tool for it, such as directions in Google Maps and, more recently, the ability to search airlines and book flights and hotels. I hope that this enhanced fashion search tool is just around the corner, but in the meantime, I will build my own.
The research question for this paper is “what is the best performing Machine Learning solution to accurately classify fashion images?”
The two primary deliverables of this project are:
Creation of a labelled data set for use in my model, and
An evaluation of machine learning and deep learning models for fashion image classification.
Being a team of one, my instructions for this project, as outlined in class by Professor Muslea, are to apply 3–5 machine learning algorithms to my dataset, and then experiment to improve the out-of-the-box results.
Due to the availability of online tutorials and documentation, I chose Keras with a TensorFlow back end, using Python to build my data set and models.
The midterm objective was to build the initial small data set and train and evaluate two machine learning models end to end, which I accomplished, and whose methodology and results will be outlined below and in Section V.
The objective of the final paper was to expand the data set to ten classes like Fashion MNIST, develop more models, and improve their accuracy, with the benchmark for performance being estimated human accuracy of 95%. Since the initial plan, I decided that rather than spend time on routine work such as expanding my dataset to 10 classes, I would instead focus on transfer learning: fine-tuning the VGG16 model and the deeper CNN Resnet50 to gain practical experience engineering deep learning models.
1) Creation of the dataset
The creator of Keras, François Chollet, outlined in the Keras blog an image classification CNN with over 94% accuracy on as few as 1,000 images per class. Therefore, my objective was to obtain a minimum of 1,000 images per class for my data set.
Initially, I scraped 100 images for each of three classes: Dresses, Pullovers and Shirts.
Unfortunately, the current method I am using has a limit of 100 images per search term.
To bring the data set up to 1,000 images per class, I specified the colors for each search i.e. red dress, blue dress, yellow dress and so on, to work around the limit. The search term was the folder the images were placed in, and once arranged into the 3 classes (dresses, shirts and pullovers), become the class labels.
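As a sketch, the colour-qualified search terms and the folder-to-label arrangement look like this (the colour list and folder layout are illustrative, not my exact setup):

```python
from pathlib import Path

CLASSES = ["dress", "pullover", "shirt"]
COLORS = ["red", "blue", "yellow", "green", "black", "white",
          "pink", "purple", "brown", "grey"]

# Each colour-qualified query can return up to 100 images, so ten
# colours per class works around the 100-image-per-search-term cap.
search_terms = [f"{colour} {cls}" for cls in CLASSES for colour in COLORS]

def class_label(image_path: Path) -> str:
    """Images are saved under a folder named after the search term;
    the class label is the last word of that folder name."""
    return image_path.parent.name.split()[-1]

print(len(search_terms))  # 30 queries -> up to 3,000 images
print(class_label(Path("data/red dress/img_001.jpg")))  # dress
```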
2) Data pre-processing
The dataset required cleaning, as some images were unreadable. I then used data augmentation with the Keras ImageDataGenerator to vary the images and bring the total per class to 1,000. In future, I could perform web scraping using the Selenium web driver, or try the Bing Image API, which does not have this limitation, to grow the dataset more quickly.
The Keras ImageDataGenerator takes each image and distorts it to create slightly different versions that are still useful for training the machine learning algorithms.
The Keras GitHub page has code to augment the images for the cats and dogs Kaggle dataset, which I have adapted for my data set as shown in Figure 1 below.
I used the Keras flow_from_directory function to preprocess these 224 x 224 images, rescaling the 0–255 pixel values. This function can also augment the images in multiple other ways, such as rotating or shifting, to enable training on more images even though the dataset is small. After the midterm, I also changed the shape of the image dataset from a 3D to a 2D array, to give me access to other code templates for calculating test loss and accuracy, which I was struggling to do in some cases when completing my midterm paper.
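The preprocessing and augmentation pipeline can be sketched as follows (the parameter values and folder names are illustrative, not my exact configuration):

```python
from tensorflow.keras.preprocessing.image import ImageDataGenerator

# Rescale the 0-255 pixel values to 0-1 and create slightly distorted
# copies of each image, so a small dataset goes further in training.
datagen = ImageDataGenerator(
    rescale=1.0 / 255,
    rotation_range=20,       # small random rotations
    width_shift_range=0.1,   # random horizontal shifts
    height_shift_range=0.1,  # random vertical shifts
    horizontal_flip=True,
)

def make_train_generator(data_dir="data/train"):
    """One sub-folder per class (dresses, pullovers, shirts) becomes
    the class label; images are resized to the 224 x 224 input that
    VGG16 expects."""
    return datagen.flow_from_directory(
        data_dir, target_size=(224, 224), batch_size=32,
        class_mode="categorical",
    )
```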
The other dataset I used, indirectly, is ImageNet, because both VGG16 and Resnet50 are pre-trained on it. ImageNet has 1,000 classes of images, including items of apparel, with at least 1,000 images per class.
1) Dataset split
In order to ensure the accuracy of the measurements of model performance, I performed training and validation using two different splits of my dataset. 20% (600) of the images were held back as the test set in both cases. For the remaining 80% of data, I split the training and validation sets 80/20 for the initial VGG16 model, the tuned VGG16 model and the Resnet50 model (outlined in Part B below).
Dietterich recommends splitting training and validation data 50/50; therefore, I also ran the VGG16 model (the best performing, as will be explained in Section V) using the recommended 50/50 split. This ensures no overlap between training and validation data: in the first run, one 50% half is training data, then that same half is used as validation data in the second run.
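The two splits described above can be sketched in a few lines of numpy (assuming 3,000 images in total, as implied by the 600-image test set):

```python
import numpy as np

rng = np.random.default_rng(42)

def split_indices(n_images, val_fraction):
    """Hold out 20% as the test set, then split the remaining 80%
    into training/validation at the given fraction (0.2 or 0.5)."""
    idx = rng.permutation(n_images)
    n_test = int(n_images * 0.2)
    test, rest = idx[:n_test], idx[n_test:]
    n_val = int(len(rest) * val_fraction)
    val, train = rest[:n_val], rest[n_val:]
    return train, val, test

train, val, test = split_indices(3000, val_fraction=0.2)
print(len(train), len(val), len(test))  # 1920 480 600
```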
2) Limitations of dataset
The dataset is just three classes: dress, pullover and shirt. These items are quite similar, and there is some mislabeling within the dataset. This has been accommodated within the allowance for 5% error rate.
My research question requires the use of a multi-class classification model, and therefore there are certain functions that are useful in this case.
At the time of the mid-term paper draft deadline, I had implemented a basic CNN and also a VGG16 pre-trained model, as shown in Figure 1. This was based on code from deeplizard on YouTube. I applied transfer learning from the weights learned by this model on ImageNet data to my fashion dataset.
In principle, each additional hidden layer increases the expressiveness of the model, which should improve accuracy on the test set.
After completing the midterm, the results indicated that there was too much bias in my model, so I took two courses of action to improve performance. First, I tuned the hyperparameters of the VGG16 model; second, I trialled a deeper Resnet50 model with 50 rather than 16 hidden layers (also with weights pre-trained on the ImageNet dataset). These two models were adapted from the OpenCV website and code provided by Mallick.
To fine-tune the models, I applied dropout to the convolutional layers and changed the learning rate; as shown in Figure 4, this improved accuracy significantly.
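A minimal sketch of the fine-tuned setup (frozen VGG16 base, dropout, RMSprop with a small learning rate); the head size, dropout placement and exact learning rate here are illustrative assumptions rather than my exact configuration:

```python
from tensorflow.keras import layers, models, optimizers
from tensorflow.keras.applications import VGG16

def build_finetuned_vgg16(weights="imagenet", num_classes=3):
    """VGG16 convolutional base with a small classification head."""
    base = VGG16(weights=weights, include_top=False,
                 input_shape=(224, 224, 3))
    base.trainable = False  # keep the ImageNet-learned features

    model = models.Sequential([
        base,
        layers.Flatten(),
        layers.Dense(256, activation="relu"),
        layers.Dropout(0.5),  # dropout for regularisation
        layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(
        optimizer=optimizers.RMSprop(learning_rate=1e-5),  # small LR
        loss="categorical_crossentropy",
        metrics=["accuracy"],
    )
    return model
```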
Resnet50 is a CNN with many more layers than VGG16; it deals with the vanishing gradient problem that comes with such depth by using identity (skip) connections that allow the gradient to pass through each convolution block.
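The identity shortcut can be illustrated with a toy numpy sketch (not the actual Resnet50 block): the output is relu(F(x) + x), so even when the learned transform F contributes nothing, the input, and its gradient, passes straight through.

```python
import numpy as np

def relu(x):
    return np.maximum(0.0, x)

def residual_block(x, w1, w2):
    """Toy residual block: y = relu(F(x) + x), with F two small
    linear layers. The '+ x' identity shortcut lets gradients flow
    straight through, easing training of very deep networks."""
    f = relu(x @ w1) @ w2   # the learned transform F(x)
    return relu(f + x)      # identity shortcut added back

x = np.ones((1, 4))
# With zero weights F(x) == 0, so the block reduces to the identity.
zero_w = np.zeros((4, 4))
print(residual_block(x, zero_w, zero_w))  # [[1. 1. 1. 1.]]
```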
A. Performance Metrics
In order to benchmark model performance, human accuracy is estimated to be 95%. 100% isn’t likely, as the class of some items may be debatable (remember the blue/black vs white/gold dress internet craze?), and there is some mislabeling in the dataset.
In this project, machine learning performance is measured twice.
First, the performance of the model after learning on the training set is measured on the validation set, with validation loss (categorical cross entropy) and accuracy as the metrics. The model is trained over 20 epochs, twice. Performance is then measured a second time on the unseen test set, again using categorical cross entropy loss and accuracy.
In order to draw conclusions about the accuracy of my model on unseen data in future, I calculated the accuracy range at 95% confidence using t-scores, because the accuracy rate of the entire population is not known.
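A minimal sketch of that interval calculation using scipy (the accuracy values below are hypothetical, not my results):

```python
import numpy as np
from scipy import stats

def accuracy_ci(acc_samples, confidence=0.95):
    """Confidence interval for mean accuracy using the t distribution
    (population standard deviation unknown, small sample size)."""
    a = np.asarray(acc_samples, dtype=float)
    n = a.size
    mean = a.mean()
    sem = a.std(ddof=1) / np.sqrt(n)       # standard error of the mean
    t = stats.t.ppf((1 + confidence) / 2, df=n - 1)
    return mean - t * sem, mean + t * sem

# Hypothetical per-run validation accuracies, for illustration only.
low, high = accuracy_ci([0.93, 0.96, 0.95, 0.97, 0.94])
print(f"{low:.2f}-{high:.2f}")
```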
1) Midterm results
Parameters and results for the two models I evaluated for the midterm are shown in Figure 4. I adapted the code for these two models from deeplizard. By changing the learning rate for the basic CNN from 0.001 to 0.01, validation accuracy improved from worse than chance (25%) to chance (33%), but then did not change over the epochs, as shown in Figure 2. The same result was visible when I increased the training and validation epochs to 20.
The basic CNN is essentially predicting the same class every time; bias is very high and accuracy is therefore very low, as shown in the confusion matrix in Figure 3.
The VGG16 model is much more expressive. By adding the many hidden layers of this convnet, pre-trained on the 1,000 classes of the ImageNet data set, and by increasing my own dataset from 100 to 1,000 images per class, I achieved 78% validation and 76% test accuracy, a much better result. The VGG16 v1 model is likely to achieve accuracy in the range of 72–78% at 95% confidence on an unseen dataset.
Still, there was room to make the model more expressive and bring the results up to 95%.
A. Final Results
The three models I evaluated for the final phase of the project are shown in Figure 4, and a graph of the measurement of validation accuracy for all 2×20 training epochs are shown in Figure 5. Once I had adapted the code from Mallick , accuracy for VGG16 immediately improved, up to human level. This code included RMSprop for the optimization function, dropout, and a much smaller learning rate. This was extremely exciting.
VGG16 v2 used the 80/20 split of training and validation data and is likely to achieve accuracy in the range of 85-100% at 95% confidence on an unseen dataset.
VGG16 v3 however split the data 50/50 so training data was significantly reduced, and accuracy reduced accordingly. This model is likely to achieve accuracy in the range of 58-91% at 95% confidence on an unseen dataset.
Resnet50 did not perform as well as the VGG16 models. This model is likely to achieve accuracy in the range of 57-80% at 95% confidence on an unseen dataset.
Figure 4 Final Results
[Table summarised: each model was trained for 20 × 2 epochs, with ReLU activations in the hidden layers and SoftMax in the final layer, categorical cross-entropy loss, and a validation/test accuracy range reported at 95% confidence.]
The basic CNN, with limited inputs and only one hidden layer, had high bias and essentially performed with accuracy only at the rate of chance.
A deep CNN like VGG16 is much more expressive, and has not been overfit, as I trained on 60% of the data, used 20% for validation, and tested on the remaining 20%. This can be seen in the closeness of the validation and test accuracy results and in the achievement of human-level accuracy of 95%. Adding dropout to the layers drastically improved performance, as did changing the optimizer from Adam to RMSprop and reducing the learning rate to a much smaller number (see Figure 4). Further hyperparameter tuning, such as learning rate decay, might lift the lower bound of the accuracy confidence interval above 85%, but given the achievement of human-level accuracy, I decided to stop here for the purpose of this assignment.

Upon evaluating the errors, it was clear that some classifications are debatable, as shown in Figures 5 and 6. Therefore, multiple classes should be assigned to the same image for this to work well as a search tool for Google. There was also a repetition of errors introduced by data augmentation: when an augmented image was used more than once (with different variations), any errors were multiplied by the same magnitude.
However, the Resnet50 model, with even more layers, surprisingly did not achieve the same level of performance, so this implementation may benefit from hyperparameter tuning. Again, for the purpose of this project, I did not continue, as VGG16 v2 achieved such good results.
The next phase of this project would be to remove all labels and use my fashion dataset to explore multi-class active learning models, possibly utilizing the code developed by Google. This could potentially overcome the high cost of manually labelling images with multiple labels to account for differences of opinion about what to label an image. My revised target would be to reduce the variability in the confidence interval: rather than 85–100%, I would like to see a minimum of 95% at 95% confidence.
Based on this analysis of machine learning models focusing on convolutional neural networks, the VGG16 model with dropout (v2) performed the best for classifying fashion images in terms of accuracy and is likely to achieve accuracy in the range of 85-100% at 95% confidence on an unseen dataset. This performance is significantly better than VGG16 v1 without dropout, and Resnet50 for this dataset and therefore the likely performance on future unseen datasets. Further work to develop a multi-class active learning model could improve accuracy even more by increasing the lower bound of the confidence interval to a minimum of 95%.
H. Xiao, K. Rasul, and R. Vollgraf, "Fashion-MNIST: A novel image dataset for benchmarking machine learning algorithms," ArXiv170807747 Cs Stat, Aug. 2017.
"A VGG-like CNN in Keras for Fashion-MNIST with 94% accuracy."
K. He, X. Zhang, S. Ren, and J. Sun, "Deep residual learning for image recognition," ArXiv151203385 Cs, Dec. 2015.
T. Dietterich, "Approximate statistical tests for comparing supervised classification learning algorithms," Neural Comput., vol. 10, no. 7, pp. 1895–1923, 1998.
Y. LeCun, L. Jackel, L. Bottou, A. Brunot, and C. Cortes, "Comparison of learning algorithms for handwritten digit recognition," p. 9.
deeplizard, Create and train a CNN Image Classifier with Keras.
J. Brownlee, "Gentle introduction to the Adam optimization algorithm for deep learning," Machine Learning Mastery, 02-Jul-2017.
D. Rumsey, "How to calculate a confidence interval for a population mean with unknown standard deviation and/or small sample size," dummies.
Y. Yang, Z. Ma, F. Nie, X. Chang, and A. G. Hauptmann, "Multi-class active learning by uncertainty sampling with diversity maximization," Int. J. Comput. Vis., vol. 113, no. 2, pp. 113–127, Jun. 2015.
Google, google/active-learning, GitHub repository, 2018.
This post discusses accountability, ethics and professionalism in data science (DS) practice, considering the demands and challenges practitioners face. Dramatic increases in the volume of data captured from people and things, and in the ability to process it, place Data Scientists in high demand. Business executives hold high hopes for the new and exciting opportunities DS can bring to their business, and hype and mysticism abound. Meanwhile, the public are increasingly wary of trusting businesses with their personal data, and governments are implementing new regulation to protect public interests. This post asks whether some form of professional ethics can protect data scientists from unrealistic expectations and far-reaching accountabilities.
Demand for DS skills is off the charts, as Data Scientists have the potential to unlock the promise of Big Data and Artificial Intelligence.
As much of our lives is conducted online, and everyday objects are connected to the internet, the "era of Big Data has begun" (boyd & Crawford 2012). Advances in computing power and cheap cloud services mean that vast amounts of digital data are tracked, stored and shared for analysis (boyd & Crawford 2012), and there is a process of "datafication" as this analysis feeds back into people's lives (Beer 2017).
Concurrently, Artificial Intelligence (AI) is gaining traction through the successful use of statistical machine learning and deep learning neural networks for image recognition, natural language processing, games, and question-and-answer dialogue (Elish & boyd 2017). AI now permeates every aspect of our lives in chatbots, robotics, search and recommendation services, automated voice assistants and self-driving cars.
Data is the new oil, and Google, Amazon, Facebook and Apple (GAFA) control vast amounts of it. Combined with their network power, this results in supernormal profits: US$25bn net profit among them in the first quarter of 2017 alone (The Economist 2017). Tesla, which made 20,000 self-driving cars in this time, is worth more than GM, which sold 2.5m (The Economist 2017).
Furthermore, traditional industries such as government, education, healthcare, financial services, insurance and retail, and functions such as accounting, marketing, commercial analysis and research, which have long used statistical modelling and analysis in decision making, are harnessing the power of Big Data and AI, which supplements or replaces "complex decision support in professional settings" (Elish & boyd 2017).
All these factors drive incredible demand from organisations, resulting in a shortage of Data Scientists.
With this incredible appetite for and supply of personal data, individuals, governments and regulators are increasingly concerned about threats to competition (globally), personal privacy and discrimination, as DS, algorithms and big data are neither objective nor neutral (Beer 2017; Goodman & Flaxman 2016). These must be understood as socio-technical concepts (Elish & boyd 2017), and their limitations and shortcomings well understood and mitigated.
To begin with, the process of summarizing humans into zeros and ones removes context; therefore, contrary to popular mythology about Big Data, the larger the data set, the harder it is to know what you are measuring (Theresa Anderson n.d.; Elish & boyd 2017). Rather, the DS practitioner has to decide what is observed, recorded and included in the model, how the results are interpreted, and how to describe its limitations (Elish & boyd 2017; Theresa Anderson n.d.).
“All too often, limitations in the data mean that cultural biases and unsound logics get reinforced and scaled by systems in which spectacle is prioritised over careful consideration”. (Elish & boyd 2017)
In addition, profiling is inherently discriminatory, as algorithms sort, order, prioritise and allocate resources in ways that can "create, maintain or cement norms and notions of abnormality" (Beer 2017; Goodman & Flaxman 2016). Statistical machine learning scales normative logic (Elish & boyd 2017), and biased data in means biased data out, even if protected attributes are excluded but correlated ones are included. Systems are not optimised to be unbiased; rather, the objective is better average accuracy than the benchmark (Merity 2016).
Lastly, algorithms by their statistical nature are risk averse, and focus where they have a greater degree of confidence (Elish & boyd 2017; Theresa Anderson n.d.) (Goodman & Flaxman 2016), exacerbating the underrepresentation of minorities that exist in unbalanced training data (Merity 2016).
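The point about optimising for average accuracy can be made concrete with a toy calculation (illustrative numbers only): a classifier can look excellent on aggregate while failing an underrepresented group.

```python
# Toy example: 950 majority-group samples, 50 minority-group samples.
majority_correct, majority_total = 940, 950   # ~99% accurate
minority_correct, minority_total = 25, 50     # 50% accurate

overall = (majority_correct + minority_correct) / (majority_total + minority_total)
print(f"overall accuracy: {overall:.1%}")    # 96.5% looks great...
print(f"minority accuracy: {minority_correct / minority_total:.0%}")  # ...but 50%
```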
In response, the European Union announced an overhaul of its data protection regime, from a Directive to the far-reaching General Data Protection Regulation. Taking effect in May 2018, this regulation protects the rights of individuals, including the citizen's right to be forgotten and to have their data stored securely, but also the right to an explanation of algorithmic decisions that significantly affect an individual (Goodman & Flaxman 2016). The regulation prohibits decisions made entirely by automated profiling and processing, and will impose significant fines for non-compliance.
Indeed, companies are currently reorganising themselves to protect the data assets they are amassing, reflecting the increased need for data security, ethics and accountability. Two recent additions to the Executive suite are the Chief Information Security Officer and the Chief Data Officer, who are responsible for ensuring organisations meet their legal obligations for data security and privacy.
DS practitioners must overcome many challenges to meet these demands for accountability and profit. It all boils down to ethics. Data scientists must identify and weigh up the potential consequences of their actions for all stakeholders, and evaluate their possible courses of action against their view of ethics or right conduct (Floridi & Taddeo 2016).
Algorithms are machine learning, not magic (Merity 2016), but the media and senior executives seem to have blind faith, and regularly use “magic” and “AI” in the same sentence (Elish & boyd 2017).
In order to earn the trust of businesses and act ethically towards the public, practitioners must close the expectation gap generated by recent successful (but highly controlled) "experiments-as-performances" by being very clear about the limitations of their DS practices. Otherwise DS will be dismissed as snake oil and collapse under the weight of the hype and unmet expectations (Elish & boyd 2017), or will breach regulatory requirements and lose public trust trying to meet them.
The accountability challenge is compounded in multi-agent, distributed global data supply chains, where accountability and control are hard to assign and assert (Leonelli 2016) and the data may not be "cooked with care", as the provenance of, and assumptions within, the data are unknown (Elish & boyd 2017; Theresa Anderson n.d.).
Furthermore, cutting edge DS is not a science in the traditional sense (Elish & boyd 2017), where hypotheses are stated and tested using scientific method. Often, it really is a black box (Winner 1993), where the workings of the machine are unknown, and hacks and short cuts are made to improve performance without really knowing why these work (Sutskever, Vinyals & Le 2014).
This makes the challenge of making the algorithmic process and results explainable to a human almost impossible in some networks (Beer 2017).
Lastly, social and technical infrastructure grows quickly around algorithms once they are out in the wild. With algorithms powering self-driving cars and air traffic collision avoidance systems, ignoring the socio-technical context can have catastrophic results. The Überlingen crash in 2002 occurred because there was limited training on what controllers should do when they disagreed with the algorithm (Ally Batley 2017; Wikipedia n.d.). Data scientists have limited time and influence to optimise the socio-technical setting before order and inertia set in, but the good news is that the time is now, while the technology is new (Winner 1980).
Indeed, the opportunities to use DS and AI for the betterment of society are vast. If data scientists embrace the uncertainty and the humanity in the data, they can make space for human creative intelligence, whilst at the same time respecting those who contributed the data, and hopefully create some real magic (Theresa Anderson n.d.).
So how can DS practitioners equip themselves to take on these challenges and opportunities ethically?
Historically, many other professions have formed professional bodies to provide support outside the influence of the professional’s employer. Members sign codes of ethics and professional conduct, in vocations as diverse as design, medicine and accounting (The Academy of Design Professionals 2012; Australian Medical Association 2006; CAANZ n.d.).
“A profession is a disciplined group of individuals who adhere to ethical standards and who hold themselves out as, and are accepted by the public as possessing special knowledge and skills in a widely recognised body of learning derived from research, education and training at a high level, and who are prepared to apply this knowledge and exercise these skills in the interest of others. It is inherent in the definition of a profession that a code of ethics governs the activities of each profession. Such codes require behaviour and practice beyond the personal moral obligations of an individual. They define and demand high standards of behaviour in respect to the services provided to the public and in dealing with professional colleagues. Further, these codes are enforced by the profession and are acknowledged and accepted by the community.” (Professions Australia n.d.)
The central components in every definition of a profession are ethics and altruism (Professions Australia n.d.), so it is worth exploring professional membership further as a tool for data science practitioners.
Current state of DS compared to accounting profession
Let us compare where the nascent DS practice is today with the chartered accountant (CA) profession. The first CA membership body was formed in 1854 in Scotland (Wikipedia 2017a), long after double-entry accounting was invented in the 13th century (Wikipedia 2017b). Modern data science began in the mid-twentieth century (Foote 2016), and there is as yet no professional membership body.
The current CA membership growth rate is unknown, but DS practitioner growth is impressive. In 2016, there were 2.1M licensed chartered accountants (Codd 2017), not including unlicensed practitioners such as bookkeepers, or Certified Practicing Accountants. IBM predicts there will be 2.7M data scientists by 2020, a 15% annual growth rate (Columbus n.d.; IBM Analytics 2017).
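As a rough illustration of what that growth rate implies, the IBM figures above can be compounded forward. This is a sketch only: the 2.7M base and 15% rate come from the cited prediction, while the five-year horizon is my own hypothetical choice.

```python
def project(base, rate, years):
    """Headcount after compounding `rate` annual growth for `years` years."""
    return base * (1 + rate) ** years

# IBM's predicted 2.7M data scientists in 2020, compounded at 15% p.a.
ds_2025 = project(2_700_000, 0.15, 5)
print(f"{ds_2025:,.0f}")  # roughly 5.4M by 2025, if the rate held
```

At that rate the DS population would pass the 2016 count of licensed CAs well before 2025, which is why the comparison with the accounting profession is timely.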
The standard of education is very high in both professions, but for different reasons. Chartered Accountants have strenuous post graduate exams to apply for membership, and requirements for continuing professional education (CAANZ n.d.).
DS entry requirements are high too, but enforced by competitive forces only. Right now, 39% of DS job openings require a Master’s or Ph.D. (IBM Analytics 2017), but this may change over time as more and more data scientists are educated outside of universities.
The CA code of ethics is very stringent, requiring high standards of ethical behaviour and outlining rules, and membership can be revoked if the rules are broken (CAANZ n.d.). CAs must treat each other respectfully, and act ethically and in accordance with the code towards their clients and the public.
The Data Science Association has a fledgling code of conduct, but unlike CAs, membership is not contingent on adhering to this code, and there are no penalties for non-compliance (Data Science Association n.d.).
There is another reason why comparison with the CA profession is interesting.
Like accounting, DS is all about numbers, and seems like a quantitative and objective science. Yet there is compelling research to indicate both are more like social sciences, and benefit from being reflexive in their research practices (boyd & Crawford 2012; Elish & boyd 2017; Chua 1986, 1988; Gaffikin 2011). Also like accountants (Gallhofer, Haslam & Yonekura 2013), DS practitioners could suffer criticism for being long on practice and short on theory.
Therefore, DS should look hard at the experience of accountants and determine if, and when, becoming a profession might work for them.
DS practitioners’ ethics should address three areas:
“Data ethics can be defined as the branch of ethics that studies and evaluates moral problems related to data (including generation, recording, curation, processing, dissemination, sharing and use), algorithms (including artificial intelligence, artificial agents, machine learning and robots) and corresponding practices (including responsible innovation, programming, hacking and professional codes), in order to formulate and support morally good solutions (e.g. right conducts or right values).” (Floridi & Taddeo 2016)
It is conceivable that individually, DS practitioners could be ethical in their conduct, without the large cost in time and money of professional membership.
Data scientists are very open about their techniques, code and results accuracy, and welcome suggestions and feedback. They use open source software packages, share their code on sites like GitHub and BitBucket, contribute answers on Stack Overflow, blog about their learnings and present and attend Meet Ups. It’s all very collegiate, and competitive forces drive continuous improvement.
But despite all this online activity, it is not clear whether they behave ethically. They do not readily share data, which is often proprietary and confidential, nor do they share substantive results and interpretations. This makes it difficult to peer review or reproduce their results, or to be transparent enough about their DS practices to ascertain whether they are ethical.
A professional body may seem like a lot of obligations and rules, but it could provide data scientists with some protection and greater access to data.
From the public’s point of view, a profession is meant to be an indicator of trust and expertise (Professional Standards Councils n.d.). Unlike other professions, the public would rarely directly employ the services of a data scientist, but they do give consent for data scientists to collect their data (“oil”).
Becoming a professional body and adopting a code of professional conduct is one way to earn public trust and the right to access and handle personal data (Accenture n.d.). It can also help pool resources (and facilitate self-employment) so it may open more doors to data scientists, and allow them to pursue initiatives that are altruistic and socially preferable (Floridi & Taddeo 2016).
Keeping ethics at the forefront of decision making actually makes for good leaders who can navigate conflict and ambiguity (Accenture n.d.), and leads to good financial results (Kiel 2015).
With the growing regulatory focus on data and data security, it is foreseeable that Chief Data Officers and Chief Information Security Officers may soon be subject to individual fines and jail-time penalties, as Chief Executive and Chief Financial Officers are under the Sarbanes-Oxley Act (Wikipedia 2017c). Professional membership can provide the training and support needed to keep practitioners up to date, in compliance and out of jail.
Lastly, right now the demand for DS skills far outweighs supply. Therefore, despite the significant concentration of DS employers, the bargaining power of some individual data scientists is relatively high. However, they have no real influence over how their work is used: their only option in a disagreement is to resign. Over the medium term, supply will catch up with demand, and then even the threat of resignation will become worthless.
Data Science Association n.d., ‘Data Science Association Code of Conduct’, Data Science Association, viewed 13 November 2017, </code-of-conduct.html>.
Elish, M.C. & boyd, danah 2017, Situating Methods in the Magic of Big Data and Artificial Intelligence, SSRN Scholarly Paper, Social Science Research Network, Rochester, NY, viewed 19 November 2017, <https://papers.ssrn.com/abstract=3040201>.
Floridi, L. & Taddeo, M. 2016, ‘What is data ethics?’, Phil. Trans. R. Soc. A, vol. 374, no. 20160360.
Gaffikin, M. 2011, ‘What is (Accounting) history?’, Accounting History, vol. 16, no. 3, pp. 235–51.
Gallhofer, S., Haslam, J. & Yonekura, A. 2013, ‘Further critical reflections on a contribution to the methodological issues debate in accounting’, Critical Perspectives on Accounting, vol. 24, no. 3, pp. 191–206.
Goodman, B. & Flaxman, S. 2016, ‘European Union regulations on algorithmic decision-making and a ‘right to explanation’’, arXiv:1606.08813 [cs, stat], viewed 13 November 2017, <http://arxiv.org/abs/1606.08813>.
When your community grows so much, you no longer recognise it
In August, I read a Wired story about social media influencers migrating some of their audience to membership sites like OnlyFans and Patreon to get paid for their content: content which is exclusive and risqué and doesn’t meet Instagram and Facebook’s community standards (Parham, 2019). Many influencers complain that Facebook’s guidelines are opaque, arbitrary and basically censorship (#freethenipple is a hashtag often used to protest the censorship of women’s bodies (Rúdólfsdóttir & Jóhannsdóttir, 2018)). They are censored not only under the community guidelines but also by some of their own followers, who report them (for an example see @tealecoco, 2019). In response, they migrate some of their audience to sites like OnlyFans. Through my CMGT530 class, I now know some theories that explain this situation.
Instagram is an online community where influencers can express themselves, and fans interact with each other as well as with the influencer. With OnlyFans, the interaction is influencer to one fan or many. Instagram has experienced massive growth recently, and when influencers have public profiles (nil entry costs), the influx of new members can dramatically change the community norms (Hirschman, 1970). Older members do not trust the newer ones (Donath, 1996), and new ones don’t act in accordance with the unwritten rules of the community (Kim, 2000; Meyrowitz, 1985). There are as many expectations on the influencer as there are followers due to the SIDE effects (Walther, 2006), and there is a lot of conflict, and groups regularly splinter off (Jenkins, 2006; Kim, 2000). Where once Instagram was perhaps backstage and a safe space for influencers, it has become front stage (Meyrowitz, 1985), and behaviours more formal and mainstream. Hence the appeal of OnlyFans. The influencers in the article like to keep their risqué OnlyFans persona separate from their more public Instagram persona, and don’t want the two to mix. Meyrowitz explains this: we have social situations and roles in those situations, and we feel awkward and uncomfortable if those situations and roles merge (Meyrowitz, 1985).
Rúdólfsdóttir, A. G., & Jóhannsdóttir, Á. (2018). Fuck patriarchy! An analysis of digital mainstream media discussion of the #freethenipple activities in Iceland in March 2015. Feminism & Psychology, 28(1), 133–151.
@tealecoco. (2019, September 22). 𝐄𝐕𝐈𝐋☽❍☾𝐀𝐍𝐆𝐄𝐋 || Model/Designer (@tealecoco) • Instagram photos and videos. Retrieved November 10, 2019, from Instagram website:
This is my reaction to material we discussed in my CMGT530 class at Annenberg: Social Dynamics of Communication Technology. The material was Czitrom (Czitrom, 1982) and the film Devil’s Playground and its Amish subjects (Walker, 2002).
The Amish people have a philosophy of Ordnung, where they try to slow down or reject technology that may pollute their traditions (Amish America, 2019). Czitrom wrote of the telegraph’s impact on macro issues like corporate and government power (Czitrom, 1982). This made me think about today’s technology and how it was used in a murder case in California, described in the October 2019 issue of Wired (Smiley, 2019). It raises the question of whether admitting data from modern devices as evidence puts the underlying tenet of “innocent until proven guilty” in criminal proceedings at risk.
In Wired’s October 2019 issue, I read about Tony Aiello, a frail 4′11″ Californian in his 90s who died last month in jail awaiting trial (updated in the online story) (Smiley, 2019). Accused of brutally murdering his stepdaughter Karen, he died before his guilt or innocence could be determined (Smiley, 2019). A neighbor’s doorbell camera placed Tony at the scene for a crucial 20-minute period during which Karen’s Fitbit registered her heart rate accelerating and then dropping to nothing at all. DNA and other evidence led to Tony being put in jail.
I have previously researched how wide DNA database searches and wide facial recognition database searches could lead to coincidental matches (à la the birthday paradox) and false positives, resulting in innocent people having to defend themselves in court and even serving prison time (Keys, 2017). However, this was different, as Tony was a suspect very early on. Nevertheless, device data and expert testimony can be incomprehensible to jury members, and accepted without understanding, even with all their flaws and without establishing motive (Gibson 2017).
With each new technology, it is really important to establish the characteristics of the devices and their data quality before admitting them as evidence, if “innocent until proven guilty” and justice are to prevail in our courts in future.
This essay examines how three forces, namely the explosion of individual images available online, the accelerating data science capability of image processing, and pressure on individual rights and freedoms, shape the use of image recognition in surveillance for crime prevention and criminal prosecution. It covers the potential risks of relying on this kind of visual evidence, and recommendations to reduce those risks to society.
We are living in an “Age of Surveillance”
Surveillance is an age-old tool of crime prevention, and through the analysis of video and still images, provides the basis for prosecution today in some individual and national security crime cases. Despite strong lobbying against it, general surveillance by government and corporations has seen an unprecedented increase in recent years (New South Wales et al. 2001). This surveillance occurs at your workplace, on the street, in public venues, in supermarkets, at the airport, but also through analysis of what you post publicly on the internet through social media. The ability to conduct surveillance effectively is driven by three forces: the explosion in images available in databases, the image processing capability of data science, and the erosion of individual rights.
Image databases are growing exponentially
The number of databases with videos and images of people is growing exponentially, firstly due to the increased use of CCTV for general surveillance. CCTV has been around since the 1960s, but it has outgrown being closed circuit and on a television, and is now any “monitoring system that uses video cameras .. aimed at preventing and detecting crime through general (not targeted) surveillance.” (Gibson 2017). Government at all levels uses CCTV to deter and detect crime, and it’s not just fixed cameras but also cameras attached to the bodies of law enforcement agents. Whilst surveillance is an unpleasant fact, many corporations and public-sector organisations gather data on individuals for other purposes, such as marketing, customer service, problem solving, and product development. Individuals often willingly consent to the collection of this data in return for these services. However, many individuals do not understand the terms and conditions they are agreeing to when providing their consent (Sedenberg & Hoffmann 2016).
Indeed, as our lives are increasingly conducted online, and cloud computing makes storage cheaper and faster, our activities are tracked, recorded and stored by corporations and governments (Hern 2016; boyd & Crawford 2012; Sedenberg & Hoffmann 2016). As a result of general surveillance and the voluntary provision of images and video over social media, your image is now stored in databases online by governments and corporates.
Image processing capability is growing rapidly also
The capability to analyse all these images has made great progress in recent years, making it possible for machines to process petabytes of surveillance images to identify individuals. Over the last five years, using deep learning convolutional neural networks (ConvNets), image processing capabilities have progressed from image classification tasks (Krizhevsky, Sutskever & Hinton 2012) using large image databases like ImageNet, to human re-identification using Siamese neural networks and contrastive difference, able to accurately recognise faces they have only seen once before, and in real time (Koch, Zemel & Salakhutdinov 2015; Varior, Haloi & Wang 2016). The YOLO (You Only Look Once) object identification and classification network achieves fast processing speeds in real time and competitive accuracy (Redmon et al. 2015). Recurrent neural networks such as long short-term memory networks have also proved able to identify objects in video sequences and caption them (Lipton, Berkowitz & Elkan 2015), however this is not in real time. In 2014, Ian Goodfellow developed generative adversarial networks (GANs), where two ConvNets are trained simultaneously, one to generate artificially created images, and the other to discriminate between real images and generated ones (Goodfellow et al. 2014).
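The Siamese re-identification idea mentioned above can be sketched minimally: two embeddings are compared by distance, and a contrastive loss pulls matching pairs together while pushing non-matching pairs at least a margin apart. This is an illustrative sketch of the loss only, not the architecture from the cited papers; the embedding vectors and margin value are invented.

```python
import numpy as np

def contrastive_loss(emb_a, emb_b, is_same_person, margin=1.0):
    """Contrastive loss on a pair of embeddings: matching pairs are
    penalised by squared distance; non-matching pairs are penalised
    only if they sit closer together than `margin`."""
    d = np.linalg.norm(emb_a - emb_b)
    if is_same_person:
        return d ** 2
    return max(0.0, margin - d) ** 2

a = np.array([0.1, 0.9])   # embedding of a face
b = np.array([0.1, 0.9])   # same person, identical embedding -> loss 0
c = np.array([0.9, 0.1])   # different person, beyond the margin -> loss 0
print(contrastive_loss(a, b, True))
print(contrastive_loss(a, c, False))
```

Training a network so that this loss is low across many labelled pairs is what lets a system recognise a face it has effectively seen only once: recognition reduces to a distance comparison between embeddings.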
And in the last two years, both Google and Facebook artificial intelligence teams have independently developed the ability to create images using ConvNets (Mordvintsev, Olah & Tyka 2015; Chintala 2015). Lastly, the processing power available to data scientists is growing rapidly, through advancements in graphics processing unit (GPU) speed and the availability of cloud computing, enabling analysis of extremely large data sets without huge investment in compute power. The speed of development in this deep learning field is incredibly fast, and it is very conceivable that products will be developed in the next 10 years that could productionise and scale these automated image recognition and generation capabilities for use by corporations, government and law enforcement in surveillance for crime prevention, detection and prosecution. The ready availability of image databases and the advancements in data science image processing capability are not enough without the right of corporations and governments to use this data for general (not targeted) surveillance. This third force is also increasingly becoming a reality.
Erosion of individual rights
There are several ways our rights are being eroded. Individual rights to privacy are being eroded voluntarily, as we give away licenses to our own images, and involuntarily, through legislation or court decisions enacting crime prevention and national security measures. More images of our daily life are captured through our phones and posted to social media. Technically, you own these images and can control their usage (Wikipedia 2017; US Copyright Office n.d.; Orlowski n.d.). However, while you own the copyright of the images you have created, you have probably already given Facebook and Amazon permission to profit from your image and images you own, through a very wide-ranging license to store and use them (Facebook n.d.).
Private organisations are using the data gathered on their users for research, yet these organisations sit outside the ethics requirements government imposes on education and health institutions (Sedenberg & Hoffmann 2016). The profit motive of these companies could undermine the privacy and security of your data (Sedenberg & Hoffmann 2016). At the personal data level, there are some serious attempts at protecting the rights of the individual. The General Data Protection Regulation of the European Union, which comes into effect in May 2018, covers all data captured from EU citizens. It codifies the “right to be forgotten”, and the “right to an explanation” of the result of any algorithm (Goodman & Flaxman 2016). However, these regulations do not seem to matter when it comes to national security. Edward Snowden and Wikileaks revealed that organisations like Yahoo and Google have been compelled in the United States courts and in Europe to hand over your data to government bodies for national security surveillance (Wikipedia 2018). It is quite feasible that Apple, Facebook and Amazon have the same obligations, and we just don’t know about it yet. The use of video cameras for general surveillance erodes an individual’s right to privacy, which, although reduced in public, is still expected to some degree due to people’s perception of the “veil of anonymity” (Gibson 2017). It also indirectly erodes freedom of speech, as people are unable to express themselves without fear of reprisal (Gibson 2017). People often say they have nothing to hide when it comes to fighting against general surveillance, but this is predicated on society and government keeping the same values of today into the future. Once something is recorded online, either in image or text, it is there forever and could be used against you. This is something people from totalitarian regimes would be able to tell Westerners.
Having online databases of images and advanced processing power, combined with the erosion of the individual right to privacy, makes the perfect conditions for an explosion in the use of image processing in criminal prevention, detection and prosecution. The next section focuses on the current and future use of image processing as a form of visual evidence in criminal prosecution.
Uses of image processing in criminal prosecution
Video and images are a form of visual evidence, whose purpose is to provide positive visual identification evidence (i.e. it is the same person), circumstantial identification evidence (i.e. it is a similar person) or recognition evidence (I know that it is the same person in the image) that supports the case to prove that the accused is the offender (Gibson 2017). Computer image processing provides visual evidence in a number of ways. Firstly, its sheer processing power enables a very wide and deep search for this evidence within image databases or millions of hours of video. It also has useful capabilities in gathering video evidence: it can detect individuals across a range of different surveillance cameras as the offender moves through the landscape; algorithms can be used to “sharpen” blurry images; and YOLO image recognition can enable a person’s face to be found in a huge database of images using neural network architecture. Variable lighting, recording quality, movement of the camera, obstructions to the line of sight, and other factors make for many interpretations of an image (Henderson et al. 2015). For this reason, an expert in “facial mapping” or “body mapping” usually examines the image and testifies in the court room, where they can be cross-examined (Gibson 2017). The expert may not positively identify the defendant, so at other times it is up to the juror to determine if the offender and the defendant are the same.
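Mechanically, the database face search described above reduces to a nearest-neighbour lookup over embeddings: score every stored face against the query and return the closest. The sketch below uses invented toy vectors and is not any real system’s API, but it shows why widening the database mechanically produces more high-scoring near-matches.

```python
import numpy as np

def top_matches(query, database, k=3):
    """Rank database face embeddings by cosine similarity to the query."""
    db = database / np.linalg.norm(database, axis=1, keepdims=True)
    q = query / np.linalg.norm(query)
    sims = db @ q                      # cosine similarity per record
    order = np.argsort(-sims)[:k]      # best k matches, highest first
    return [(int(i), float(sims[i])) for i in order]

# Toy database of four face embeddings; row 2 equals the query exactly.
faces = np.array([[1.0, 0.0], [0.0, 1.0], [0.6, 0.8], [0.7, 0.7]])
print(top_matches(np.array([0.6, 0.8]), faces, k=2))
```

Note that the second-best match here scores almost as highly as the exact one: as databases grow toward whole populations, such near-matches multiply, which is the coincidental-match risk the DNA experience warns about.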
In future, as the databases of images grow and the capability to use computer vision processing accelerates, I can imagine a huge facial image database similar to the DNA database collated in US states like California (LA Times 2012), where instead of DNA samples, CCTV video images from a cold case will be matched against the database in order to track down a suspect. However, unlike DNA, where few people have their DNA recorded in the database, we are moving towards the entire population’s faces being recorded online somewhere, and most likely one day in the hands of law enforcement. What can we learn from the risks of the use of DNA forensic evidence and CCTV evidence, to be sure that visual evidence procured through image processing will not create false positives and injustice?
Limitations of visual evidence in criminal prosecution
We begin by understanding the limitations of visual evidence for the jurors who must evaluate it in criminal trials. Video is a constructed medium, which can be interpreted in more than one, and even opposing, ways in the court room. After the lawyers for the four police officers accused of beating Rodney King deconstructed the eyewitness video, three of the four were acquitted, yet public outcry was so intense that it led to the LA Riots (Gibson 2017). Unlike witnesses, video and images cannot be cross-examined; however, they are efficiently absorbed by the jury compared to witnesses, who may be boring or too technical (Gibson 2017). When evidence is presented by an expert, jurors can suffer from the “white coat effect”, which prejudices the juror to weight the expert’s evidence more heavily (Gibson 2017). Therefore, visual evidence is fraught with many of the issues that face forensic evidence more broadly, including DNA evidence.
In the USA, since 1994 the FBI has been using the Combined DNA Index System (CODIS): a computer program that enables the comparison of DNA profiles in databases at the local, state, and national level (Morris 2010). Recently, CODIS has been used to search for suspects using DNA matches on cold cases, and a growing proportion of criminal cases are relying on these cold DNA database hits. Worryingly, there have been many examples of miscarriages of justice, where match statistics were wildly wrong yet heavily overweighted by the jury, despite the accused having no means, motive or opportunity (Murphy 2015). We must explore the limitations of DNA evidence to understand what limitations there could be if image searches were used like this in the future. Like visual evidence, jurors must evaluate DNA evidence in criminal trials. DNA evidence is accompanied by random match probability (RMP) statistics: the likelihood of finding a DNA match by chance. There are many differences between the databases in CODIS: the collection process, the accuracy of samples, the criteria for inclusion in the database, and the statistical methods and programs used for analysis (Morris 2010). These differences can lead to very different impacts on match statistics. Research has shown that a juror’s interpretation of the likelihood of a coincidental match also depends on how these statistics are presented (Morris 2010). The statistics are complicated, but seemingly rare events can have a surprisingly high likelihood if you present the probability of someone, somewhere matching, rather than the odds of a certain person matching. For example, the chance of any two people in a room having the same birth day and month is greater than 50% if there are more than 22 people in the room. This represents the database match probability. When the Arizona DNA database was searched for intra-database record-to-record matches, they found multiple occurrences of the same DNA profile from different people.
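The birthday figure above can be checked directly, and the same compounding of pairwise chances is what drives database match probability. A sketch, using the standard 365-day approximation; the independence assumption in the second function is a simplification, not how CODIS statistics are actually computed.

```python
def p_shared_birthday(n_people, days=365):
    """Probability that at least two of n people share a birthday."""
    p_all_unique = 1.0
    for k in range(n_people):
        p_all_unique *= (days - k) / days
    return 1.0 - p_all_unique

def p_database_hit(n_records, per_pair_match_prob):
    """Probability of at least one coincidental pair match among all
    C(n, 2) record pairs, assuming pairs match independently."""
    pairs = n_records * (n_records - 1) // 2
    return 1.0 - (1.0 - per_pair_match_prob) ** pairs

print(round(p_shared_birthday(23), 3))  # just over 0.5, matching the text
print(round(p_shared_birthday(22), 3))  # just under 0.5
```

The second function makes the essay’s point numerically: holding the per-pair match probability fixed, growing the database multiplies the number of pairs quadratically, so the chance of some coincidental hit climbs rapidly even when any particular match remains vanishingly unlikely.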
The wider the search, the greater the likelihood of a coincidental match and Type I errors (false positives). Therefore, coincidental matches would be much more likely in a national or even global database of faces. Databases such as CODIS also suffer from ascertainment bias, due to their non-random sampling. There are currently four different ways of presenting these match statistics (three of them court approved), with research finding widely different outcomes in terms of verdict (Morris 2010). Jurors fall prey to the prosecutor’s fallacy: “drawing the inappropriate conclusion that a particular probability of chance occurrence is the same as the likelihood that the person incriminated by the statistics is innocent of the crime.” (Morris 2010) How can data scientists prevent their image databases and research from being similarly misunderstood and misrepresented?
Recommendations
The field of forensic evidence, and especially DNA and visual evidence, is evolving, and data scientists must conduct themselves today in a way that prevents the pitfalls of injustice now and in the future. Database standardisation is essential, in terms of the quality of images, compression and formats, plus the data dictionary used. Data scientists must ensure that their work is statistically sound and agree a common methodology. They must search for opposing evidence, to avoid the trap of confirmation bias. They must form close relationships with legal professionals to work in forensics. Informed consent must be gained from users to use their images in this way. To protect their privacy and justice, society must become more data literate, as these issues are having a greater impact on every part of our lives, even in criminal justice.
Bibliography
boyd, danah & Crawford, K. 2012, ‘Critical Questions for Big Data’, Information, Communication & Society, vol. 15, no. 5, pp. 662–79.
Chintala, S. 2015, The Eyescream Project: NeuralNets dreaming natural images, viewed 14 January 2018, <http://soumith.ch/eyescream/>.
Facebook n.d., ‘Facebook Terms of service’, facebook.com, viewed 17 December 2017, <https://www.facebook.com/legal/terms>.
Gibson, A.J. 2017, On the face of it: CCTV images, recognition evidence and criminal prosecutions in New South Wales, PhD thesis.
Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. & Bengio, Y. 2014, ‘Generative Adversarial Networks’, arXiv:1406.2661 [cs, stat], viewed 14 January 2018, <http://arxiv.org/abs/1406.2661>.
Goodman, B. & Flaxman, S. 2016, ‘European Union regulations on algorithmic decision-making and a “right to explanation”’, arXiv:1606.08813 [cs, stat], viewed 13 November 2017, <http://arxiv.org/abs/1606.08813>.
Henderson, C., Blasi, S.G., Sobhani, F. & Izquierdo, E. 2015, ‘On the impurity of street-scene video footage’, IET Conference Proceedings, The Institution of Engineering & Technology, Stevenage, United Kingdom, viewed 21 January 2018, <https://search.proquest.com/docview/1776480046/abstract/3C556FDE82424A67PQ/7>.
Hern, A. 2016, ‘Your battery status is being used to track you online’, The Guardian, 2 August, viewed 30 December 2017, <http://www.theguardian.com/technology/2016/aug/02/batterystatus-indicators-tracking-online>.
Koch, G., Zemel, R. & Salakhutdinov, R. 2015, ‘Siamese neural networks for one-shot image recognition’, ICML Deep Learning Workshop.
Krizhevsky, A., Sutskever, I. & Hinton, G.E. 2012, ‘ImageNet classification with deep convolutional neural networks’, Advances in Neural Information Processing Systems, pp. 1097–1105.
LA Times 2012, ‘Playing fast and loose with DNA’, Los Angeles Times, 31 July, viewed 13 January 2018, <http://articles.latimes.com/2012/jul/31/opinion/la-ed-dna-database-california-20120731>.
Lipton, Z.C., Berkowitz, J. & Elkan, C. 2015, ‘A Critical Review of Recurrent Neural Networks for Sequence Learning’, arXiv:1506.00019 [cs], viewed 5 November 2017, <http://arxiv.org/abs/1506.00019>.
Mordvintsev, A., Olah, C. & Tyka, M. 2015, ‘Inceptionism: Going Deeper into Neural Networks’, Google Research Blog, viewed 17 December 2017, <https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html>.
Morris, E.K. 2010, Statistical probabilities in a forensic context: How do jurors weigh the likelihood of coincidence?, Ph.D. thesis, University of California, Irvine, viewed 13 January 2018, <https://search.proquest.com/docview/755686007/abstract/7A00420D28404DF2PQ/2>.
Murphy, E. 2015, Inside the Cell: The Dark Side of Forensic DNA, first edn, Nation Books, New York, NY, USA.
New South Wales Law Reform Commission 2001, Surveillance: an interim report, New South Wales Law Reform Commission, Sydney.
OfficerJoeK-9 n.d., ‘Joi’, Off-world: The Blade Runner Wiki, viewed 30 December 2017, <http://bladerunner.wikia.com/wiki/Joi>.
Orlowski, A. n.d., ‘Cracking copyright law: How a simian selfie stunt could make a monkey out of Wikipedia’, The Register.
Redmon, J., Divvala, S., Girshick, R. & Farhadi, A. 2015, ‘You Only Look Once: Unified, Real-Time Object Detection’, arXiv:1506.02640 [cs], viewed 14 January 2018, <http://arxiv.org/abs/1506.02640>.
Sedenberg, E. & Hoffmann, A.L. 2016, ‘Recovering the History of Informed Consent for Data Science and Internet Industry Research Ethics’, arXiv:1609.03266 [cs], viewed 17 December 2017, <http://arxiv.org/abs/1609.03266>.
US Copyright Office n.d., Compendium II of Copyright Office Practices, viewed 17 December 2017, <http://www.copyrightcompendium.com/>.
Varior, R.R., Haloi, M. & Wang, G. 2016, ‘Gated Siamese Convolutional Neural Network Architecture for Human Re-Identification’, arXiv:1607.08378 [cs], viewed 13 January 2018, <http://arxiv.org/abs/1607.08378>.
Wikipedia 2018, ‘Edward Snowden’, Wikipedia, viewed 13 January 2018, <https://en.wikipedia.org/w/index.php?title=Edward_Snowden&oldid=819863748>.
Wikipedia 2017, ‘Personality rights’, Wikipedia, viewed 30 December 2017, <https://en.wikipedia.org/w/index.php?title=Personality_rights&oldid=814604845>.