Professional Ethics in Contemporary Data Science Practice

Executive Summary

This paper discusses accountability, ethics and professionalism in data science (DS) practice, considering the demands and challenges practitioners face. Dramatic increases in the volume of data captured from people and things, and in the ability to process it, place data scientists in high demand. Business executives hold high hopes for the new and exciting opportunities DS can bring to their businesses, and hype and mysticism abound. Meanwhile, the public are increasingly wary of trusting businesses with their personal data, and governments are implementing new regulation to protect public interests. We ask whether some form of professional ethics can protect data scientists from unrealistic employer expectations and far-reaching public accountabilities.

Demand for Data Science

Demand for DS skills is off the charts, as Data Scientists have the potential to unlock the promise of Big Data and Artificial Intelligence.

As much of our lives are conducted online, and everyday objects are connected to the internet, the “era of Big Data has begun” (boyd & Crawford 2012). Advancements in computing power and cheap cloud services mean that vast amounts of digital data are tracked, stored and shared for analysis (boyd & Crawford 2012), and a process of “datafication” takes place as this analysis feeds back into people’s lives (Beer 2017).

Concurrently, Artificial Intelligence (AI) is gaining traction through the successful use of statistical machine learning and deep learning neural networks for image recognition, natural language processing, game playing, and question-and-answer dialogue (Elish & boyd 2017). AI now permeates every aspect of our lives through chatbots, robotics, search and recommendation services, automated voice assistants and self-driving cars.

Data is the new oil, and Google, Amazon, Facebook and Apple (GAFA) control vast amounts of it. Combined with their network power, this results in supernormal profits: US$25bn net profit between them in the first quarter of 2017 alone (The Economist 2017). Tesla, which made 20,000 self-driving cars in this period, is worth more than GM, which sold 2.5 million (The Economist 2017).

Furthermore, traditional sectors such as government, education, healthcare, financial services, insurance and retail, and functions such as accounting, marketing, commercial analysis and research, which have long used statistical modelling and analysis in decision making, are harnessing the power of Big Data and AI to supplement or replace “complex decision support in professional settings” (Elish & boyd 2017).

All these factors drive incredible demand from organisations, resulting in a shortage of Data Scientists.

Demand for Accountability

With this incredible appetite for and supply of personal data, individuals, governments and regulators are increasingly concerned about threats to competition (globally), personal privacy and discrimination, as DS, algorithms and big data are neither objective nor neutral (Beer 2017; Goodman & Flaxman 2016). They must be understood as socio-technical concepts (Elish & boyd 2017), and their limitations and shortcomings well understood and mitigated.

To begin with, the process of summarising humans into zeros and ones removes context; therefore, contrary to popular mythology about Big Data, the larger the data set, the harder it is to know what you are measuring (Theresa Anderson n.d.; Elish & boyd 2017). Rather, the DS practitioner has to decide what is observed, recorded and included in the model, how the results are interpreted, and how to describe their limitations (Elish & boyd 2017; Theresa Anderson n.d.).

All too often, limitations in the data mean that “cultural biases and unsound logics get reinforced and scaled by systems in which spectacle is prioritised over careful consideration” (Elish & boyd 2017).

In addition, profiling is inherently discriminatory, as algorithms sort, order, prioritise and allocate resources in ways that can “create, maintain or cement norms and notions of abnormality” (Beer 2017; Goodman & Flaxman 2016). Statistical machine learning scales normative logic (Elish & boyd 2017), and biased data in means biased data out, even when protected attributes are excluded but correlated proxies are included. Systems are not optimised to be unbiased; rather, the objective is better average accuracy than the benchmark (Merity 2016).
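To make the proxy problem concrete, here is a minimal simulation sketch in R (all variable names are hypothetical): a model trained with the protected attribute excluded still scores the protected group very differently, because the correlated proxy carries the signal.

# Minimal sketch: dropping a protected attribute does not remove bias
# when a correlated proxy (say, postcode) remains in the model.
set.seed(42)
n <- 10000
protected <- rbinom(n, 1, 0.5)                               # protected group membership
proxy     <- rbinom(n, 1, ifelse(protected == 1, 0.8, 0.2))  # proxy correlated with group
outcome   <- rbinom(n, 1, plogis(-1 + 1.5 * protected))      # historically biased outcomes

fit <- glm(outcome ~ proxy, family = binomial)               # protected attribute excluded

# Average predicted risk still differs sharply between the groups:
tapply(predict(fit, type = "response"), protected, mean)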

Lastly, algorithms by their statistical nature are risk-averse, and focus where they have a greater degree of confidence (Elish & boyd 2017; Theresa Anderson n.d.; Goodman & Flaxman 2016), exacerbating the underrepresentation of minorities in unbalanced training data (Merity 2016).

In response, the European Union announced an overhaul of its data protection regime, from a Directive to the far-reaching General Data Protection Regulation. Taking effect in May 2018, this regulation protects the rights of individuals, including citizens’ right to be forgotten and to have their data stored securely, but also the right to an explanation of algorithmic decisions that significantly affect an individual (Goodman & Flaxman 2016). The regulation prohibits decisions made entirely by automated profiling and processing, and will impose significant fines for non-compliance.

Ethical Challenges and Opportunities for DS Practitioners

DS practitioners must overcome many challenges to meet these demands for accountability and profit. It all boils down to ethics: data scientists must identify and weigh up the potential consequences of their actions for all stakeholders, and evaluate their possible courses of action against their view of ethics or right conduct (Floridi & Taddeo 2016).

Algorithms are machine learning, not magic (Merity 2016), but the media and senior executives seem to have blind faith, regularly using “magic” and “AI” in the same sentence (Elish & boyd 2017).

In order to earn the trust of businesses and act ethically towards the public, practitioners must close the expectation gap generated by recent successful (but highly controlled) “experiments-as-performances”, by being very clear about the limitations of their DS practices. Otherwise DS will be dismissed as snake oil and collapse under the weight of the hype and unmet expectations (Elish & boyd 2017), or breach regulatory requirements and lose public trust trying to meet them.

The accountability challenge is compounded in multi-agent, distributed global data supply chains, where accountability and control are hard to assign and assert (Leonelli 2016): the data may not have been “cooked with care”, and the provenance of, and assumptions embedded in, the data are unknown (Elish & boyd 2017; Theresa Anderson n.d.).

Furthermore, cutting-edge DS is not a science in the traditional sense (Elish & boyd 2017), where hypotheses are stated and tested using the scientific method. Often it really is a black box (Winner 1993), where the workings of the machine are unknown, and hacks and shortcuts are made to improve performance without really knowing why they work (Sutskever, Vinyals & Le 2014).

This makes explaining the algorithmic process and its results to a human almost impossible in some networks (Beer 2017).

Lastly, social and technical infrastructure grows quickly around algorithms once they are out in the wild. With algorithms powering self-driving cars and air traffic collision avoidance systems, ignoring the socio-technical context can have catastrophic results: the Überlingen mid-air collision of 2002 occurred in part because controllers had limited training on what to do when they disagreed with the algorithm (Ally Batley 2017; Wikipedia n.d.). Data scientists have limited time and influence to get the socio-technical setting optimised before order and inertia set in, but the good news is that the time is now, whilst the technology is new (Winner 1980).

Indeed, the opportunities to use DS and AI for the betterment of society are vast. If data scientists embrace the uncertainty and the humanity in the data, they can make space for human creative intelligence, whilst at the same time respecting those who contributed the data, and hopefully create some real magic (Theresa Anderson n.d.).

Professions and Ethics

So how can DS practitioners equip themselves to take on these challenges and opportunities ethically?

Historically, many other professions have formed professional bodies to provide support outside the influence of the professional’s employer. Members sign codes of ethics and professional conduct, in vocations as diverse as design, medicine and accounting (The Academy of Design Professionals 2012; Australian Medical Association 2006; CAANZ n.d.).

Should DS practitioners follow this trend?

What is a profession?

“A profession is a disciplined group of individuals who adhere to ethical standards and who hold themselves out as, and are accepted by the public as possessing special knowledge and skills in a widely recognised body of learning derived from research, education and training at a high level, and who are prepared to apply this knowledge and exercise these skills in the interest of others. It is inherent in the definition of a profession that a code of ethics governs the activities of each profession.” (Professions Australia n.d.)

The central components in every definition of a profession are ethics and altruism (Professions Australia n.d.); it is therefore worth exploring professional membership further as a tool for data science practitioners.

Current state of DS compared to the accounting profession

Let us compare where the nascent DS practice is today with the chartered accountancy (CA) profession. The first CA membership body was formed in 1854 in Scotland (Wikipedia 2017a), long after double-entry accounting was invented in the 13th century (Wikipedia 2017b). Modern data science began in the mid-twentieth century (Foote 2016), and there is as yet no professional membership body.

The current CA membership growth rate is unknown, but DS practitioner growth is impressive. In 2016 there were 2.1 million licensed chartered accountants[1] (Codd 2017), while IBM predicts there will be 2.7 million data scientists by 2020, with demand growing at 15% annually (Columbus n.d.; IBM Analytics 2017).

The standard of education is very high in both professions, but enforced for different reasons. Chartered Accountants must pass strenuous postgraduate exams to qualify for membership, and meet requirements for continuing professional education (CAANZ n.d.).

DS entry standards are high too, but enforced only by competitive forces. Right now, 39% of DS job openings require a Masters or PhD (IBM Analytics 2017), but this may change over time as more and more data scientists are educated outside universities.

The CA code of ethics is very stringent, requiring high standards of ethical behaviour and outlining rules, and membership can be revoked if the rules are broken (CAANZ n.d.). CAs must treat each other respectfully, and act ethically and in accordance with the code towards their clients and the public.

Lastly, like accounting, DS is all about numbers, and seems like a quantitative and objective science. Yet there is compelling research indicating that both are more like social sciences, and benefit from reflexivity in their research practices (boyd & Crawford 2012; Elish & boyd 2017; Chua 1986, 1988; Gaffikin 2011). Also like accountants (Gallhofer, Haslam & Yonekura 2013), DS practitioners could suffer criticism for being long on practice and short on theory.

Therefore, DS practitioners should look hard at the experience of accountants and determine if, and when, becoming a profession might work for them.

For and Against DS becoming a profession

It is conceivable that, individually, DS practitioners could be ethical in their conduct without the large cost in time and money of professional membership.

Data scientists are very open about their techniques, code and the accuracy of their results, and welcome suggestions and feedback. They use open source software packages, share their code on sites like GitHub and Bitbucket, contribute answers on Stack Overflow, blog about their learnings, and present at and attend meetups. It’s all very collegial, and competitive forces drive continuous improvement.

But despite all this online activity, it is not clear whether they behave ethically. They do not readily share data, as it is often proprietary and confidential, nor do they share substantive results and interpretations. This makes it difficult to peer review or reproduce their results, or to be transparent enough about their DS practices to ascertain whether they are ethical.

A professional body may seem like a lot of obligations and rules, but by proclaiming an ethical stance it could offer data scientists some protection, and more access to data.

From the public’s point of view, a profession is meant to be an indicator of trust and expertise (Professional Standards Councils n.d.). Unlike with other professions, the public would rarely employ the services of a data scientist directly, but they do give consent for data scientists to collect their data (“oil”).

Becoming a profession could earn public trust and personal data (Accenture n.d.). It could also help practitioners pool resources and pursue initiatives that are altruistic and socially preferable (Floridi & Taddeo 2016). Ethical conduct also makes for good leaders who can navigate conflict and ambiguity (Accenture n.d.), and delivers good financial results (Kiel 2015).

With the growing regulatory focus on data and data security, it is foreseeable that Chief Data Officers and Chief Information Security Officers may soon be subject to individual fines and jail-time penalties, as Chief Executive and Chief Financial Officers are under the Sarbanes–Oxley Act (Wikipedia 2017c). Professional membership can provide the training and support needed to keep practitioners up to date, in compliance and out of jail.

Lastly, right now the demand for DS skills far outweighs supply. Therefore, despite the significant concentration of DS employers (in GAFA), the bargaining power of some individual data scientists is relatively high. However, they have no real influence over how their work is used: their only option in a disagreement is to resign. Over the medium term, supply will catch up with demand, and then even the threat of resignation will become worthless.

In summary

Steering the course of DS practice towards ethical outcomes is easiest at the outset (Winner 1980); however, it is highly unlikely that DS practitioners will stand up to their employers and voluntarily band together to create a professional membership body in the immediate future.

Professional ethics could protect data scientists from unrealistic employer expectations and far-reaching public accountabilities, but the organisational effort may come too late.

Regulatory pressure that counters the power of GAFA may create the force for change, but more likely professional indemnity insurers and legal liability cases will eventually force sole traders and small-to-medium businesses to band together as a professional body, to shoulder the responsibility of public accountability and earn the right to their data.

Bibliography

Accenture n.d., ‘Data Ethics Point of View’, accenture.com, viewed 12 November 2017, <https://www.accenture.com/t00010101T000000Z__w__/au-en/_acnmedia/PDF-22/Accenture-Data-Ethics-POV-WEB.pdf#zoom=50>.

Ally Batley 2017, Air Crash Investigation – DHL Mid Air COLLISION – Crash in Überlingen, YouTube, viewed 20 November 2017, <https://www.youtube.com/watch?v=yQ0yBFoO2V4>.

Australian Medical Association 2006, ‘AMA Code of Ethics – 2004. Editorially Revised 2006’, Australian Medical Association, viewed 20 November 2017, <https://ama.com.au/tas/ama-code-ethics-2004-editorially-revised-2006>.

Beer, D. 2017, ‘The social power of algorithms’, Information, Communication & Society, vol. 20, no. 1, pp. 1–13.

boyd, danah & Crawford, K. 2012, ‘Critical Questions for Big Data’, Information, Communication & Society, vol. 15, no. 5, pp. 662–79.

CAANZ n.d., ‘Codes and Standards | Member Obligations’, CAANZ, viewed 20 November 2017, <http://www.charteredaccountantsanz.com/member-services/member-obligations/codes-and-standards>.

Chua, W.F. 1986, ‘Radical Developments in Accounting Thought’, The Accounting Review, vol. LXI, no. 4, pp. 601–33.

Chua, W.F. 1988, ‘Interpretive Sociology and Management Accounting Research – a critical review’, Accounting, Auditing and Accountability Journal, vol. 1, no. 2, pp. 59–79.

Codd, A. 2017, ‘How many Chartered accountants are in the world?’, Quora, viewed 20 November 2017, <https://www.quora.com/How-many-Chartered-accountants-are-in-the-world>.

Columbus, L. n.d., ‘IBM Predicts Demand For Data Scientists Will Soar 28% By 2020’, Forbes, viewed 20 November 2017, <https://www.forbes.com/sites/louiscolumbus/2017/05/13/ibm-predicts-demand-for-data-scientists-will-soar-28-by-2020/>.

Data Science Association n.d., ‘Data Science Association Code of Conduct’, Data Science Association, viewed 13 November 2017, <http://www.datascienceassn.org/code-of-conduct.html>.

Elish, M.C. & boyd, danah 2017, Situating Methods in the Magic of Big Data and Artificial Intelligence, SSRN Scholarly Paper, Social Science Research Network, Rochester, NY, viewed 19 November 2017, <https://papers.ssrn.com/abstract=3040201>.

Floridi, L. & Taddeo, M. 2016, ‘What is data ethics?’, Phil. Trans. R. Soc. A, vol. 374, no. 2083, 20160360.

Foote, K. 2016, ‘A Brief History of Data Science’, DATAVERSITY, viewed 21 November 2017, <http://www.dataversity.net/brief-history-data-science/>.

Gaffikin, M. 2011, ‘What is (accounting) history?’, Accounting History, vol. 16, no. 3, pp. 235–51.

Gallhofer, S., Haslam, J. & Yonekura, A. 2013, ‘Further critical reflections on a contribution to the methodological issues debate in accounting’, Critical Perspectives on Accounting, vol. 24, no. 3, pp. 191–206.

Goodman, B. & Flaxman, S. 2016, ‘European Union regulations on algorithmic decision-making and a “right to explanation”’, arXiv:1606.08813 [cs, stat], viewed 13 November 2017, <http://arxiv.org/abs/1606.08813>.

IBM Analytics 2017, ‘The Quant Crunch’, IBM, viewed 20 November 2017, <https://www.ibm.com/analytics/us/en/technology/data-science/quant-crunch.html>.

Kiel, F. 2015, ‘Measuring the Return on Character’, Harvard Business Review, viewed 13 November 2017, <https://hbr.org/2015/04/measuring-the-return-on-character>.

Leonelli, S. 2016, ‘Locating ethics in data science: responsibility and accountability in global and distributed knowledge production systems’, Phil. Trans. R. Soc. A, vol. 374, no. 2083, 20160122.

Merity, S. 2016, ‘It’s ML, not magic: machine learning can be prejudiced’, Smerity.com, viewed 19 November 2017, <https://smerity.com/articles/2016/algorithms_can_be_prejudiced.html>.

Professional Standards Councils n.d., What is a profession?, viewed 19 November 2017, <https://www.psc.gov.au/what-is-a-profession>.

Professions Australia n.d., What is a profession?, viewed 21 November 2017, <http://www.professions.com.au/about-us/what-is-a-professional>.

Sutskever, I., Vinyals, O. & Le, Q.V. 2014, ‘Sequence to Sequence Learning with Neural Networks’, arXiv:1409.3215 [cs], viewed 4 November 2017, <http://arxiv.org/abs/1409.3215>.

The Academy of Design Professionals 2012, ‘The Academy of Design Professionals – Code of Professional Conduct’, designproacademy.org, viewed 13 November 2017, <http://designproacademy.org/code-of-professional-conduct.html>.

The Economist 2017, ‘The world’s most valuable resource is no longer oil, but data’, The Economist, 6 May, viewed 19 November 2017, <https://www.economist.com/news/leaders/21721656-data-economy-demands-new-approach-antitrust-rules-worlds-most-valuable-resource>.

Theresa Anderson n.d., Managing the Unimaginable, YouTube, viewed 19 November 2017, <https://www.youtube.com/watch?v=YEPPW09qpfQ&feature=youtu.be>.

Wikipedia 2017a, ‘Chartered accountant’, viewed 21 November 2017, <https://en.wikipedia.org/w/index.php?title=Chartered_accountant&oldid=810642744>.

Wikipedia 2017b, ‘History of accounting’, viewed 21 November 2017, <https://en.wikipedia.org/w/index.php?title=History_of_accounting&oldid=810643659>.

Wikipedia 2017c, ‘Sarbanes–Oxley Act’, viewed 21 November 2017, <https://en.wikipedia.org/w/index.php?title=Sarbanes%E2%80%93Oxley_Act&oldid=808445664>.

Wikipedia n.d., ‘Überlingen mid-air collision’, viewed 20 November 2017, <https://en.wikipedia.org/wiki/%C3%9Cberlingen_mid-air_collision>.

Winner, L. 1980, ‘Do Artifacts Have Politics?’, Daedalus, vol. 109, no. 1, pp. 121–36.

Winner, L. 1993, ‘Upon Opening the Black Box and Finding It Empty: Social Constructivism and the Philosophy of Technology’, Science, Technology, & Human Values, vol. 18, no. 3, pp. 362–78.

[1] Not including unlicensed practitioners such as bookkeepers, or Certified Practising Accountants.

Anthropomorphising the algorithm

Leading on from my last blog post’s conclusion that holding algorithms accountable is a bit of a daft idea, I want to thank Richard Nota for this wonderful comment on The Conversation article that Andrew Waites posted in our Slack channel:

Richard Nota

The ethics is about the people that oversee the design and programming of the algorithms.

Machine learning algorithms work blindly towards the mathematical objective set by their designers. It is vital that this task include the need to behave ethically.

A good start would be for people to stop anthropomorphising robots and artificial intelligence.

Anthropomorphising… I had to google to see if that is even a word (it is).
But that is exactly what I believe needs to happen: stop anthropomorphising algorithms.
As Theresa puts it, algorithms are part of the infrastructure, and once let loose into the wild they can become extremely inflexible if they are not created with care and managed appropriately.
It’s up to the humans to manage the ethical implications of the algorithms in their systems.
Anyway, the article was written by Lachlan McCalman, who works at Data61, and he makes some very good arguments.
He points out that making the smallest mistake possible does not mean NO mistakes.
Lachlan describes four errors and how an algorithm can be designed to adjust for them.
1. Different people, different mistakes
There can actually be quite large mistakes for different subgroups that offset each other, particularly for minorities: because there are few examples, getting their predictions wrong doesn’t penalise the overall results much.
I knew about this already thanks to my favourites, Jeff Larson and the team at ProPublica, and the offsetting errors in the recidivism prediction algorithm: false negatives and positives for white and black males. I’m sure you can work out who was the false negative (incorrectly predicted not to reoffend) vs false positive (incorrectly predicted to reoffend).
To fix this, Lachlan suggests the algorithm would need to be changed to care equally about accuracy for the subgroups (see the sketch after this list).
2. The algorithm isn’t sure
Of course, it’s just a guess, and there are varying degrees of uncertainty.
Lachlan suggests the algorithm could allow for giving the benefit of the doubt where there is uncertainty.
3. Historical bias
This one is huge: of course patterns of bias become entrenched if the algorithm is fed biased history.
Changing the algorithm (positive discrimination, perhaps) to counter this bias would be required.
4. Conflicting priorities
Trade-offs need to be made when there are limited resources.
Judgement is required, with no simple answer here.
In conclusion, Lachlan proposes there needs to be an “ethics engineer” who explicitly obtains ethical requirements from stakeholders, converts them into a mathematical objective, and then monitors the algorithm’s ability to meet that objective in production.
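Checking for error 1 takes only a few lines of R. Here is a rough sketch, assuming a hypothetical data frame results with columns actual, predicted (both 0/1) and group: aggregate accuracy can look fine while the subgroup error rates tell a very different story.

# Sketch: surfacing the offsetting subgroup errors that aggregate accuracy hides.
# `results` is a hypothetical data frame with columns actual, predicted and group.
per_group <- by(results, results$group, function(d) {
  c(accuracy       = mean(d$predicted == d$actual),
    false_pos_rate = mean(d$predicted[d$actual == 0] == 1),
    false_neg_rate = mean(d$predicted[d$actual == 1] == 0))
})
round(do.call(rbind, per_group), 3)   # one row of rates per subgroup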

About algorithms being black boxes

For 36111 Philosophies of Data Science Practices’ first assignment, I am exploring the emerging practice of holding algorithms accountable.

Often, people refer to algorithms as black boxes.

There are three different definitions of a black box, according to Merriam-Webster:

Definition of black box

1: a usually complicated electronic device whose internal mechanism is usually hidden from or mysterious to the user; broadly: anything that has mysterious or unknown internal functions or mechanisms
2: a crashworthy device in aircraft for recording cockpit conversations and flight data
3: a device in an automobile that records information (such as speed, temperature, or gasoline efficiency) which can be used to monitor vehicle performance or determine a cause in the event of an accident

Usually, when people refer to algorithms as black boxes, they mean type 1. So what does that imply about how we interact with these black boxes? That they are something mysterious that munges inputs into instructions you blindly follow?

If you treat algorithms like this, you may end up opening a type 2 black box.

Let me explain what I mean with an example, courtesy of the Air Crash Investigation TV series (see the episode, perhaps illegally uploaded to YouTube, here).

In 2002, two planes collided mid-air over Überlingen in Germany, tragically killing everyone on board, mostly children. Afterwards, the devastated air traffic controller was murdered in his front garden by a grief-maddened father who had lost his entire family in the crash (Wikipedia). Absolutely awful.

One of the contributing factors to this disaster was confusion in the human/computer interaction in the use of the Traffic Alert and Collision Avoidance System (TCAS) (see Kuchar and Drumm for how it works). TCAS is basically a system of sensors and algorithms that alerts pilots and advises them what action to take to avoid collisions. In this incident, the instructions of TCAS and of the air traffic controller conflicted. One pilot followed TCAS, the other air traffic control, so they both descended, ultimately ending in tragedy.

The TCAS software itself did not fail, but as there was no international code on what to do in these circumstances, the overall system failed. The supporting infrastructure was not there, and the human–computer interaction was not adequately considered in design or in training. A previous incident in Japan (Wikipedia) had been reported to the International Civil Aviation Organization, but no action had been taken. (If that crash had occurred, 677 people would have died, which would have been the largest toll ever.)

So my work is going to consider not just countering machine bias in the algorithm itself, but also the context in which the algorithm is used, and whether that context is appropriate.

At the end of the day, holding an algorithm accountable is actually a ludicrous concept. It can only be the humans who are accountable.

On countering machine bias

ProPublica has a whole section dedicated to this topic. So glad to see this, and it appears they have covered insurance companies charging higher premiums in minority neighbourhoods, which I always suspected was happening. Can’t wait to read that!

This is a topic for another blog post!

GANs, glorious GANs!

GANs, based on unsupervised learning and game theory, are just so darn elegant. The Grace Kelly of deep learning.

Pitting the generator and the discriminator against each other (where the generator tries to fool the discriminator into classifying its output as a real sample) is genius in its simplicity.
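For the record, the whole game fits in one line of maths. In Goodfellow et al.'s original formulation, the discriminator $D$ and the generator $G$ play the minimax game

$$\min_G \max_D V(D, G) = \mathbb{E}_{x \sim p_{\text{data}}(x)}[\log D(x)] + \mathbb{E}_{z \sim p_z(z)}[\log(1 - D(G(z)))]$$

where $D$ maximises its ability to tell real samples $x$ from generated samples $G(z)$, and $G$ minimises the same quantity by making its fakes indistinguishable from the real thing.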

This report here gives a very good definition of them in Section 2, and creates a multi-task deep convolutional GAN to classify emotions from audio.

Or you can watch Ian Goodfellow describe his creation here.

Data Science Ethics: my initial thoughts

I had two main thoughts about this: self-regulation by the data science profession, and data literacy.

The promise of big data and artificial intelligence is at an all-time high, but by no means at its peak. The availability of data to mine is growing exponentially. And yet the data science community is still relatively small (compared with, say, accountants or bankers) and focused on scientific techniques.

Data science is making immense changes to the way people live, changes that will impact generations to come.

Reading these articles made me wonder, are data scientists proactively managing the ethical ramifications of the data they create, the algorithms they build, and the decisions made on the basis of their work?

This is a pivotal time in the evolution of data science ethics.

Data scientists must establish strong ethical foundations in their profession, to ensure data science is used to make the world a better place, and before the profession gets over-regulated by government because they didn’t do their part voluntarily.

As I explained in a past blog post, even Facebook is recognising that it is not just a technology tool, but makes a real impact on the world: https://15-6762.ca.uts.edu.au/according-to-mark-zuckerberg-facebook-is-not-a-media-company/

Is now a good time for the profession to become a self regulating membership body?

Will auditors soon start to audit machine learning algorithms? (They should!)

I came across this code of conduct http://www.datascienceassn.org/code-of-conduct.html

Data literacy is also an interesting counterpoint to all of this.

I don’t think it will be long before the general populace revolts against organisations that are careless with their data, and against opaque algorithms determining their fate in a way NO ONE can explain. People don’t have blind faith anymore.

The University of Washington is now offering a course, “Calling Bullshit”, to improve the quality of science: http://callingbullshit.org/syllabus.html

In the mid-nineties, I read Wild Swans, an autobiographical story about three generations of Chinese women (the last being the author, Jung Chang) spanning about 100 years. If you want the abridged version, you can read it in Wikipedia: https://en.wikipedia.org/wiki/Wild_Swans

After reading what they endured on the losing side of a war, and then under Communist rule, I’m certain those three daughters of China would warn us to guard our personal information closely, and to watch how it’s being used against us. Random pieces of data given away here and there could become information weapons in the wrong hands, and not just against us but against our descendants.

This is just one of the many sources of a general feeling of foreboding that I have about my personal data.

The other forces that make me think a slow train wreck is coming:

  • Ease of dissemination of “information” due to social media
  • Growing ease of storage
  • Inability to destroy your own data: it’s immutable
  • Diminishing interpretability of results

Below are some notes from the articles.

Privacy, anonymity, transparency, trust and responsibility concern data collection, curation, analysis and use.

What is data ethics? http://rsta.royalsocietypublishing.org/content/374/2083/20160360

Floridi and Taddeo talk about three axes of data science ethics.

Data ethics concerns the generation, recording, curation, processing, dissemination, sharing and use of the data.

Data science ethics is about what is done with the data, i.e. the ethics of the algorithms and the ethics of the practices.

Regarding the algorithms, auditing the outcomes against a gold standard is essential, to ensure the algorithm achieves sensible and ethical results.

Creating a professional code of conduct helps ensure ethical practices.

3 Key Ethics Principles for Big Data and Data Science

Jay Taylor

Collect minimal data, and aggregate it.

Identify and scrub sensitive data.

Have a crisis management plan in place in case your insight backfires.

Above all, teach ethics!

Using Tree-Based Gradient Boosting Models to Classify Terrorism Events as Suicide Attacks

Tracy Keys

13 June 2017

Background

My team Gonzo at UTS used the Global Terrorism Database (GTD) to explore whether distinct features of terrorism events could predict the ABC’s online reaction to them. We did this by web scraping the ABC’s Twitter feed and Google Search results, then building generalised linear models and elastic net regularisation models.

Our research illustrated the dramatic increase in terrorism events in recent years, and as shown below (Figure 1), the absolute number and proportion of suicide attacks is also on the rise. Most of these attacks were bombings or explosions (Figure 2). I wanted to explore these suicide attacks further, and identify the most important characteristics in the GTD, or the most influential factors, in determining the classification of a suicide attack. This paper represents my exploration, and is definitely not perfect!

Figure 1 Terrorist Attacks during 2005-2015

Figure 2 Terrorist Attack Types during 2005-2015

My aims in the work discussed in this blog are, firstly, to deepen our team’s understanding of how we can use the database itself, and secondly, to use a new statistical method, decision tree classification with gradient boosting, to answer my new research question: how well can gradient boosting models classify terrorism events as suicide attacks?

Data Preparation

I had to change the data import by converting all the logical variables to factors for the gbm package, and make sure there were no NAs.

The package also has limitations on the number of levels a factor can have, so the research focused on the Middle East and North Africa, and South Asia regions of the GTD.

In addition, I filtered cities to only those that had experienced a suicide attack; this way I could keep my city and group name levels below 1024.

gbm also takes binary outcome variables, so I translated my target “suicide” into “outcome_binary”.

After initial data exploration, reading up on the GTD codebook, and finding extreme correlation between my outcome and some variables, three variables were removed from the data: Weapsubtype1_txt = “Suicide…”, Nkillterr and terrorist_killed.
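Here is a rough dplyr sketch of the preparation steps just described, assuming the GTD has been read into a data frame gtd (the column names are indicative only):

library(dplyr)

gtd_prep <- gtd %>%
  # keep gbm's factor levels manageable: restrict the regions...
  filter(region_txt %in% c("Middle East & North Africa", "South Asia")) %>%
  # ...and keep only cities that have seen at least one suicide attack
  filter(city %in% unique(city[suicide == 1])) %>%
  # gbm wants factors rather than logicals, and a 0/1 outcome with no NAs
  mutate_if(is.logical, as.factor) %>%
  mutate(outcome_binary = as.integer(suicide)) %>%
  na.omit()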

The gbm model

My data was split 70/30 into training and testing sets. My best cross-validated gbm model is shown below:

library(gbm)  # assumes the `training` data frame prepared above

gbm_fit <- gbm(outcome_binary ~ ., distribution = "bernoulli", data = training,
               cv.folds = 10, verbose = "CV", n.trees = 100,
               interaction.depth = 3)

Model Evaluation

As this is a classification model with a binary outcome, I evaluated the model by calculating the confusion matrix shown below.

                    Reference
Prediction          Suicide = No   Suicide = Yes
Suicide = No                8252               6
Suicide = Yes               1059             620

Table 1 Confusion Matrix

Due to the high number of false positives (1059 of the 1679 positive predictions), the precision of the model is 37%, but accuracy is high (89%) due to the dominance of negative predictions. Given the sparsity of the response variable, this is a common result. This is shown graphically in Figure 3, the ROC chart.
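For anyone reproducing this, the headline metrics fall out of the confusion matrix in a few lines. This is a sketch assuming the gbm_fit and testing objects from above, and the 6.35% probability threshold reported in the findings below:

library(gbm)

# score the held-out set at the best cross-validated number of trees
best_iter <- gbm.perf(gbm_fit, method = "cv")
probs     <- predict(gbm_fit, newdata = testing, n.trees = best_iter, type = "response")

pred   <- ifelse(probs > 0.0635, "Suicide = Yes", "Suicide = No")
actual <- ifelse(testing$outcome_binary == 1, "Suicide = Yes", "Suicide = No")

cm <- table(Prediction = pred, Reference = actual)
precision <- cm["Suicide = Yes", "Suicide = Yes"] / sum(cm["Suicide = Yes", ])  # ~37%
accuracy  <- sum(diag(cm)) / sum(cm)                                            # ~89%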

The Area Under the Curve (AUC) was 98.33%, which is really high (100% is perfect). This is illustrated by the very small gap between the training and testing Receiver Operating Characteristic (ROC) curves in Figure 3.

I did have some faith in the result, however, as I had already removed the three variables that were highly correlated with the suicide variable.

Figure 3 ROC Chart

The sensitivity score is 99%, and specificity is 88.6%. I used our lecturer Stephan’s model evaluation code, but I have to say, something looks odd with the charts (Figure 4).

Figure 4 Sensitivity Specificity Chart

 

Model Findings

The model calculated the probability threshold for classification as a suicide attack to be 6.35%. The gbm summary table showed that three variables accounted for 100% of the relative influence: nperps, weapsubtype1_txt and city.

##                                     var   rel.inf
## nperps                           nperps 47.774687
## weapsubtype1_txt       weapsubtype1_txt 42.885061
## city                               city  9.340252

I analysed these variables further to see why they were so influential.

The majority of terrorist attacks, and in particular suicide attacks, were perpetrated by one attacker. This does not mean the attackers were acting alone, but in the vast majority of cases only one person carried out the attack (Figure 5).

Figure 5 Suicide attacks by number of attackers (nperps)

Figure 6  Weapon (sub) type (weapsubtype1_txt)

Vehicles were used in the majority of suicide attacks (but not of attacks overall) (Figure 6). Given the finding that most suicide attacks are bomb/explosive attacks (Figure 2), this makes sense.

Lastly, the third most influential variable, city, is illustrated in Figure 7. Baghdad has withstood the greatest number of terrorist attacks over the ten-year period, including suicide attacks, and has suffered many devastating car-bomb suicide attacks in this time, killing hundreds of people.

Figure 7 Cities that withstood terrorist attacks (city)

Figure 8 Groups perpetrating terrorist attacks (gname)

The Islamic State (ISIL) has been the perpetrator of the majority of suicide attacks. Boko Haram, an active terrorist group that also perpetrates suicide attacks, is not included, as it operates in the sub-Saharan African region.

I can conclude that the three most important variables from my model stand up to the scrutiny of further data analysis.

Recommendations for enhancing the model

I tried to get glmnet with ridge and lasso working, to deal with the sparseness of my response variable; however, the model would run overnight and then fail. Getting this working would definitely improve the model.

Building an elastic net regularisation model and upsampling the minority class would also improve it; a sketch of one possible glmnet approach is below.
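This is only a sketch, assuming the training data frame prepared earlier. glmnet works on sparse matrices natively, which may avoid the overnight failures; alpha = 0.5 mixes the ridge (0) and lasso (1) penalties.

library(glmnet)
library(Matrix)

# build a sparse design matrix; glmnet handles sparsity efficiently
x <- sparse.model.matrix(outcome_binary ~ . - 1, data = training)
y <- training$outcome_binary

cv_fit <- cv.glmnet(x, y, family = "binomial", alpha = 0.5)
plot(cv_fit)                    # cross-validated deviance against lambda
coef(cv_fit, s = "lambda.min")  # coefficients at the best lambda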

Git it on…. maybe not!

I’ve been mildly anxious about my hard work living on my C drive, managing version control, tracking my thinking and learning, that kind of thing, and then last night I got this error trying to open my work…

load("~/R/lineartimeseries/DAM assignment 2 Pt A Q3 v1.R")
Error: bad restore file magic number (file may be corrupted) -- no data loaded
In addition: Warning message:
file 'DAM assignment 2 Pt A Q3 v1.R' has magic number '#####'
Use of save versions prior to 2 is deprecated

My anxiety escalated! But then I rebooted and everything was fine. (In hindsight, the message makes sense: load() expects a saved .RData file, and a plain .R script starting with # comments has the wrong “magic number”; scripts are for source() or the editor instead.)

Following this brief moment of panic, Durand helpfully pointed out that this is where something like GitHub would be useful.

Ah, this course is teaching me so much 🙂

So I thought, I am going to follow this tutorial: http://product.hubspot.com/blog/git-and-github-tutorial-for-beginners

But then of course, in the usual style, we get sent off to another tutorial, http://mac.appstorm.net/how-to/utilities-how-to/how-to-use-terminal-the-basics/, which is just for Mac.

Seriously.

So stay tuned for Part II but right now I am going back to writing my DAM assignment.

According to Mark Zuckerberg, Facebook is not a media company

According to Mark Zuckerberg, its CEO, Facebook, the world’s largest social media platform[i], is not a media company[ii].

Zuckerberg explained in August 2016: “No, we are a tech company, not a media company… We build the tools, we do not produce any content.”[iii]

One of those tools is the Facebook News Feed, which provides each of the almost 2bn[iv] monthly active users with a hyper-personalised news stream: “…an algorithmically generated and constantly refreshing summary of updates…”[v] from friends and any other page a user follows, plus targeted ads and Page suggestions from Facebook. There is also the Trending module on the right-hand side of the Facebook user home page, which surfaces news stories and is created entirely by an algorithm[vi].

How Facebook News Feed works

The Facebook algorithm is complex, but it essentially works by identifying key features of a post (is it a video, who posted it, how often it was shared and by whom), and by using natural language processing to identify the text, topics and sentiments within the post.

Then, in order to present relevant content to the specific user, Facebook analyses the past behaviour of the user, and of other users, across hundreds of factors, and predicts the likelihood that the user will engage with this piece of content because they, or people like them, previously engaged with this content type and topic. This likelihood, combined with the age of the content and how popular it is across the network, is its News Feed rank score. Content is then selected and sorted so that the highest-ranked content comes first in the news feed, with the rest presented in descending order.
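Facebook's actual scoring model is proprietary, but purely as an illustration of the shape just described, a toy rank score in R might look like this (every column name and weight here is hypothetical):

# Toy illustration only: Facebook's real News Feed model is proprietary.
# Assumes a data frame `posts` with hypothetical columns p_engage (predicted
# engagement probability), age_hours, shares and likes.
rank_feed <- function(posts) {
  recency <- exp(-posts$age_hours / 24)            # newer content decays less
  posts$score <- posts$p_engage * recency *
                 log1p(posts$shares + posts$likes) # network-wide popularity
  posts[order(-posts$score), ]                     # highest-ranked content first
}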

The Facebook algorithm is constantly being tweaked by Facebook through unsupervised machine learning, supplemented by the analysis of their team of data scientists, and qualitative feedback from dedicated user focus groups.[vii][viii]

Benefits of Facebook News Feed

Using unsupervised text analysis and machine learning algorithms to find and serve up content to the specific user has a lot of benefits: such hyper-personalisation can be performed economically at scale, giving huge international reach to content creators, publishers and interest groups.

Users are served up content that has a high probability of being from like-minded people, brands and groups, without having to search for it themselves (although that too is possible, utilising text analysis and search tools).

Brands and groups that know how to use the system can quickly gain followers or reach a large audience, making it a great platform for brand awareness, or for non-mainstream/minority causes to publish and broadcast their views.

In this regard, the Facebook News Feed offers its users the promise of freedom of speech and a capitalist marketplace, as does the internet as a whole:

“What is driving the Net is the promise of political efficacy, of the enhancement of democracy through citizens’ access and use of new communications technologies.”[ix]

Facebook as a technology company builds the tools, and then content creators and publishers use the platform and the News Feed algorithm to find an audience for their content. Facebook is the neutral, laissez-faire “marketplace”, with community guidelines to prevent the encouragement of hate and crime[x].

Downsides of Facebook News Feed

However, recent events have highlighted some of the flaws in the News Feed algorithm and in the processes for dealing with its errors. In the recent US election, it was uncovered that fake news sites were being promoted in people’s feeds to gain advertising revenue[xi]; the algorithm currently cannot distinguish legitimate news sites from satirical and/or fake ones. Facebook also has not developed its automated monitoring systems and escalation workflows at the same rate as its automated products, and just this week a horrific video of a man murdering another man in cold blood remained on the site for three hours after it was first reported[xii].

It is becoming increasingly difficult for Facebook to argue that it is not a media company, or that it does not have a responsibility to its users and the community for how its tools are used.

Facebook and its News Feed algorithm are under pressure to assure the community that they are not proliferating fake news, manipulating their users’ emotions[xiii], promoting hate, discouraging respect or dialogue between the two sides of a debate[xiv], or broadcasting violent and terrible video and taking too long to remove it[xv]. Even more so, they are under pressure from their advertisers to ensure their brands are not placed next to such content. Some advertisers have recently pulled advertising from Google and YouTube, and Facebook is very aware it could be next[xvi].

In addition, for Facebook’s users the algorithm is not transparent, and cannot be reset, customised or trained by the user. Users can find it frustrating and feel like they are stuck in an echo chamber, where they are open to manipulation by Facebook, lobby groups or unscrupulous advertisers who know how to game the algorithm.

“What if people “like” posts that they don’t really like, or click on stories that turn out to be unsatisfying? The result could be a news feed that optimizes for virality, rather than quality—one that feeds users a steady diet of candy, leaving them dizzy and a little nauseated, liking things left and right but gradually growing to hate the whole silly game.” [xvii]

The Verdict

On balance, I think the benefits of the Facebook News Feed algorithm and its natural language processing outweigh these costs. Facebook is still very much listening to its users, is aware that there is intense competition for their attention, and is therefore constantly working to improve the algorithm and its products.

For example, in January 2017 Facebook changed the Trending module to show only trusted news sources[xviii], in April 2017 it implemented a button for reporting possible fake news stories, and it has established a user group to provide real human feedback on the algorithm.

Facebook recently announced a project with the esteemed journalist Jeff Jarvis and CUNY to build relationships and support credible journalism.[xix]

Even Mark Zuckerberg, CEO of Facebook, is changing his tune. In December 2016 he said:

“Facebook is a new kind of platform. It’s not a traditional technology company…It’s not a traditional media company. You know, we build technology and we feel responsible for how it’s used.”[xx]

Which is just as well, because whilst he might not want to admit Facebook is a media company, 2bn users a month use Facebook for their news, and if Facebook doesn’t act responsibly, legislators will eventually catch on that Facebook and social media are very much key to the world’s global media ecosystem.

End notes

[i] Wikipedia.com, Facebook. [ONLINE] Available at:  https://en.wikipedia.org/wiki/Facebook [Accessed 17 April 2017].

[ii] Reuters.com, Giulia Segreti. 2016. Facebook CEO says group will not become a media company. [ONLINE] Available at: http://www.reuters.com/article/us-facebook-zuckerberg-idUSKCN1141WN. [Accessed 17 April 2017].

[iii] Reuters.com, Giulia Segreti. 2016. Facebook CEO says group will not become a media company. [ONLINE] Available at: http://www.reuters.com/article/us-facebook-zuckerberg-idUSKCN1141WN. [Accessed 17 April 2017].

[iv] Wikipedia.com, Facebook. [ONLINE] Available at:  https://en.wikipedia.org/wiki/Facebook [Accessed 17 April 2017].

[v] Wikipedia.com, Timeline of Facebook. [ONLINE] Available at: https://en.wikipedia.org/wiki/Timeline_of_Facebook [Accessed 17 April 2017].

[vi] TheGuardian.com, Facebook fires trending topics team [ONLINE] Available at https://www.theguardian.com/technology/2016/aug/29/facebook-fires-trending-topics-team-algorithm [Accessed 17 April 2017].

[vii] Slate.com, How Facebook’s news feed algorithm works [ONLINE] Available at http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html [Accessed 17 April 2017].

[viii] Techcrunch.com, Ultimate guide to the Facebook News Feed [ONLINE] Available at https://techcrunch.com/2016/09/06/ultimate-guide-to-the-news-feed/ [Accessed 17 April 2017].

[ix] Dean, Jodi (2005), “Communicative Capitalism: Circulation and the Foreclosure of Politics,” Cultural Politics 1(1): 62.

[x] Facebook, Controversial, Harmful and hateful speech on Facebook [ONLINE] Available at https://www.facebook.com/notes/facebook-safety/controversial-harmful-and-hateful-speech-on-facebook/574430655911054/ [Accessed 17 April 2017].

[xi] Forbes.com, How Facebook helped Donald Trump become president [ONLINE] Available at https://www.forbes.com/sites/parmyolson/2016/11/09/how-facebook-helped-donald-trump-become-president/#3a548ab759c5 [Accessed 17 April 2017].

[xii] Theaustralian.com.au, 2017. Murder video forces scrutiny at Facebook [ONLINE] Available at http://www.theaustralian.com.au/business/wall-street-journal/murder-video-forces-scrutiny-at-facebook/news-story/79aa1b6e6acf9dce738062f226c422a6 [Accessed 20 April 2017].

[xiii] Theguardian.com, Facebook reveals news feed experiment to control emotions [ONLINE] Available at https://www.theguardian.com/technology/2014/jun/29/facebook-users-emotions-news-feeds [Accessed 17 April 2017].

[xiv] Financial Times, Facebook and the manufacture of consent [ONLINE] Available at https://ftalphaville.ft.com/2016/11/16/2179807/facebook-and-the-manufacture-of-consent/ [Accessed 17 April 2017].

[xv] Theaustralian.com.au, 2017. Murder video forces scrutiny at Facebook [ONLINE] Available at http://www.theaustralian.com.au/business/wall-street-journal/murder-video-forces-scrutiny-at-facebook/news-story/79aa1b6e6acf9dce738062f226c422a6 [Accessed 20 April 2017].

[xvi] TheGuardian.com Google pledges more control for brands over ad placement [ONLINE] Available at https://www.theguardian.com/media/2017/mar/17/google-pledges-more-control-for-brands-over-ad-placement [Accessed 17 April 2017].

[xvii] Slate.com How Facebook’s news feed algorithm works [ONLINE] Available at http://www.slate.com/articles/technology/cover_story/2016/01/how_facebook_s_news_feed_algorithm_works.html [Accessed 17 April 2017].

[xviii] RT.com Facebook fake news trending algorithm [ONLINE] Available at https://www.rt.com/viral/375121-facebook-fake-news-trending-algorithm/ [Accessed 17 April 2017].

[xix] UsaToday.com  Facebook Friends media journalism project [ONLINE] Available at https://www.usatoday.com/story/tech/news/2017/01/11/facebook-friends-media-journalism-project/96428460/ [Accessed 17 April 2017].

[xx] Techcrunch.com, Josh Constine, Zuckerberg implies Facebook is a media company, just not a traditional media company [ONLINE] Available at https://techcrunch.com/2016/12/21/fbonc/ [Accessed 17 April 2017].

Using the topicmodels package for analysis of topics in texts

My vignette is about text mining and analysis, utilising the tm and topicmodels packages in R and Latent Dirichlet Allocation (LDA), to work out what documents are written about without having to read them all!

The vignette shows you how to create a Document-Term Matrix, then uses LDA to work out what key themes are present in a body of documents (called a corpus) and to assign each document to those topics, with a probability for each topic.

This tool can help a user find a relevant document without having to search for it by name, or even know what it was written about!
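As a taster, here is a minimal sketch of the workflow on a toy three-document corpus (the vignette itself goes deeper):

library(tm)
library(topicmodels)

docs <- c("stocks fell as markets reacted to interest rates",
          "the team scored late to win the match",
          "central banks kept interest rates on hold")

corpus <- VCorpus(VectorSource(docs))
corpus <- tm_map(corpus, content_transformer(tolower))
corpus <- tm_map(corpus, removeWords, stopwords("en"))

dtm <- DocumentTermMatrix(corpus)        # rows = documents, columns = terms
lda <- LDA(dtm, k = 2, control = list(seed = 123))

terms(lda, 3)            # top 3 terms per topic
topics(lda)              # most likely topic per document
posterior(lda)$topics    # per-document topic probabilities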

Anyway, here is the link to my vignette:

http://rpubs.com/benjibex/266565

I hope you find it useful.

Tracy