
TL;DR
How three forces shape the use of image recognition in surveillance for crime prevention and criminal prosecution: the explosion of individual images available online, the accelerating image-processing capability of data science, and pressure on individual rights and freedoms. Covers the potential risks of relying on this kind of visual evidence, and recommendations for reducing those risks to society.
We are living in an “Age of Surveillance”
Surveillance is an age-old tool of crime prevention, and today the analysis of video and still images provides the basis for prosecuting both individual and national security crimes.
Despite strong lobbying against it, general surveillance by government and corporations has seen an unprecedented increase in recent years (New South Wales Law Reform Commission 2001). This surveillance occurs at your workplace, on the street, in public venues, in supermarkets and at the airport, but also through analysis of what you post publicly on the internet through social media.
The ability to conduct surveillance effectively is driven by three forces: the explosion in images
available in databases, the image processing capability of data science and the erosion of individual
rights.
Image Databases are growing exponentially
The number of databases holding videos and images of people is growing exponentially, firstly because of the increased use of CCTV for general surveillance.
CCTV has been around since the 1960s, but it has outgrown being closed circuit and on a television, and is now any “monitoring system that uses video cameras ... aimed at preventing and detecting crime through general (not targeted) surveillance” (Gibson 2017). Governments at all levels use CCTV to deter and detect crime, and it’s not just fixed cameras but also cameras attached to the bodies of law enforcement agents.
While surveillance is an unpleasant fact, many corporations and public-sector organisations gather data on individuals for other purposes, such as marketing, customer service, problem solving and product development. Individuals often willingly consent to the collection of this data in return for the services provided. However, many individuals do not understand the terms and conditions they are agreeing to when providing their consent (Sedenberg & Hoffmann 2016).
Indeed, as our lives are increasingly conducted online, and cloud computing makes storage cheaper and faster, our activities are tracked, recorded and stored by corporations and governments (Hern 2016; boyd & Crawford 2012; Sedenberg & Hoffmann 2016).
As a result of general surveillance and the voluntary provision of images and video over social media, your image is now stored in online databases by governments and corporations.
Image Processing capability is also growing rapidly
The capability to analyse all these images has also made great progress in recent years, making it possible for machines to process petabytes of surveillance images to identify individuals.
Over the last five years, using deep convolutional neural networks (ConvNets), image processing capabilities have progressed from image classification tasks on large image databases like ImageNet (Krizhevsky, Sutskever & Hinton 2012) to human re-identification using Siamese neural networks trained with a contrastive loss, which can accurately recognise, in real time, faces they have seen only once before (Koch, Zemel & Salakhutdinov 2015; Varior, Haloi & Wang 2016).
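To make the mechanics concrete, here is a minimal sketch of a Siamese network with a contrastive loss, assuming PyTorch; the architecture, dimensions and training data are illustrative stand-ins, not the setups from the cited papers.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class SiameseNet(nn.Module):
    """Twin encoder: both images pass through the SAME convolutional weights."""
    def __init__(self, embedding_dim=128):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 32, 5), nn.ReLU(), nn.MaxPool2d(2),   # 105 -> 101 -> 50
            nn.Conv2d(32, 64, 5), nn.ReLU(), nn.MaxPool2d(2),  # 50 -> 46 -> 23
            nn.Flatten(),
            nn.Linear(64 * 23 * 23, embedding_dim),
        )

    def forward(self, x1, x2):
        return self.encoder(x1), self.encoder(x2)

def contrastive_loss(e1, e2, same, margin=1.0):
    """same=1 pulls a pair together; same=0 pushes it at least `margin` apart."""
    d = F.pairwise_distance(e1, e2)
    return (same * d.pow(2) + (1 - same) * F.relu(margin - d).pow(2)).mean()

# One toy training step on random 105x105 'faces'.
net = SiameseNet()
opt = torch.optim.Adam(net.parameters(), lr=1e-3)
x1, x2 = torch.randn(8, 1, 105, 105), torch.randn(8, 1, 105, 105)
same = torch.randint(0, 2, (8,)).float()       # 1 = same identity, 0 = different
loss = contrastive_loss(*net(x1, x2), same)
opt.zero_grad(); loss.backward(); opt.step()
```

At inference time the learned embedding distance is simply thresholded, which is what allows a face seen only once to be matched without retraining.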
The YOLO (You Only Look Once) object detection and classification network achieves real-time processing speeds with competitive accuracy (Redmon et al. 2015).
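As a sketch of how such a detector might be run in practice, OpenCV’s dnn module can load pre-trained Darknet weights; the file names below are assumptions standing in for whatever configuration and weights files you obtain.

```python
import cv2
import numpy as np

# Hypothetical file names: substitute the actual config/weights you download.
net = cv2.dnn.readNetFromDarknet("yolov3.cfg", "yolov3.weights")
layer_names = net.getUnconnectedOutLayersNames()

img = cv2.imread("street_scene.jpg")          # any surveillance still
h, w = img.shape[:2]

# YOLO expects a square, normalised blob; 416x416 trades speed for accuracy.
blob = cv2.dnn.blobFromImage(img, 1 / 255.0, (416, 416), swapRB=True, crop=False)
net.setInput(blob)

# One forward pass yields every candidate box:
# [cx, cy, bw, bh, objectness, class scores...]
for output in net.forward(layer_names):
    for det in output:
        scores = det[5:]
        class_id = int(np.argmax(scores))
        confidence = float(scores[class_id])
        if confidence > 0.5:
            cx, cy, bw, bh = det[:4] * np.array([w, h, w, h])
            print(f"class {class_id} at ({cx:.0f}, {cy:.0f}) conf {confidence:.2f}")
```

The single forward pass over the whole image, rather than thousands of region proposals, is what makes the approach fast enough for live video.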
Recurrent neural networks such as long short-term memory (LSTM) networks have also proved able to identify objects in video sequences and caption them (Lipton, Berkowitz & Elkan 2015), although not yet in real time.
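A minimal sketch, again assuming PyTorch and with made-up dimensions, of the basic pattern: an LSTM consumes a sequence of per-frame feature vectors (e.g. ConvNet embeddings) and emits a label for the whole clip.

```python
import torch
import torch.nn as nn

class ClipClassifier(nn.Module):
    """Runs an LSTM over per-frame ConvNet features and classifies the clip."""
    def __init__(self, feature_dim=512, hidden_dim=256, num_classes=10):
        super().__init__()
        self.lstm = nn.LSTM(feature_dim, hidden_dim, batch_first=True)
        self.head = nn.Linear(hidden_dim, num_classes)

    def forward(self, frames):                 # frames: (batch, time, feature_dim)
        _, (h_n, _) = self.lstm(frames)        # final hidden state summarises the clip
        return self.head(h_n[-1])

model = ClipClassifier()
clip = torch.randn(4, 30, 512)                 # 4 clips of 30 frames, 512-d features
logits = model(clip)                           # (4, 10) class scores per clip
print(logits.shape)
```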
In 2014, Ian Goodfellow developed generative adversarial networks (GANs), in which two networks are trained simultaneously: one generates artificially created images, while the other learns to discriminate between real images and generated ones (Goodfellow et al. 2014).
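The core adversarial loop can be sketched in a few lines; this is a toy setup on two-dimensional points rather than images, assuming PyTorch, and nothing here reproduces the original paper’s architecture.

```python
import torch
import torch.nn as nn

G = nn.Sequential(nn.Linear(16, 32), nn.ReLU(), nn.Linear(32, 2))   # noise -> fake sample
D = nn.Sequential(nn.Linear(2, 32), nn.ReLU(), nn.Linear(32, 1))    # sample -> real logit
bce = nn.BCEWithLogitsLoss()
opt_g = torch.optim.Adam(G.parameters(), lr=2e-4)
opt_d = torch.optim.Adam(D.parameters(), lr=2e-4)

for step in range(200):
    real = torch.randn(64, 2) + 3.0            # toy 'real' distribution
    fake = G(torch.randn(64, 16))

    # Discriminator: label real samples 1 and generated samples 0.
    d_loss = bce(D(real), torch.ones(64, 1)) + bce(D(fake.detach()), torch.zeros(64, 1))
    opt_d.zero_grad(); d_loss.backward(); opt_d.step()

    # Generator: make the discriminator score its fakes as real.
    g_loss = bce(D(fake), torch.ones(64, 1))
    opt_g.zero_grad(); g_loss.backward(); opt_g.step()
```

The tension between the two losses is the whole trick: as the discriminator improves, the generator is forced to produce ever more realistic samples.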
And in the last two years, both the Google and Facebook artificial intelligence teams have independently developed the ability to create images using ConvNets (Mordvintsev, Olah & Tyka 2015; Chintala 2015).
Lastly, the processing power available to data scientists is growing rapidly through advancements in graphics processing unit (GPU) speed and the availability of cloud computing, enabling the analysis of extremely large data sets without huge investment in compute power.
Development in this deep learning field is incredibly fast, and it is entirely conceivable that, within the next 10 years, products will be developed that productionise and scale these automated image recognition and generation capabilities for use by corporations, government and law enforcement in surveillance for crime prevention, detection and prosecution.
Still, the ready availability of image databases and the advancements in data science image processing capability are not enough without the right of corporations and governments to use this data for general (not targeted) surveillance. This third force has also increasingly become a reality in recent years.
Erosion of Individual Rights
There are several ways our rights are being eroded.
Individual rights to privacy are being eroded voluntarily, as we give away licenses to our own images, and involuntarily, through legislation and court decisions enacting crime prevention and national security measures.
More and more images of our daily lives are captured through our phones and posted to social media. Technically, you own these images and can control their usage (Wikipedia 2017; US Copyright Office n.d.; Orlowski n.d.).
However, while you own the copyright in the images you have created, you have probably already given Facebook and Amazon permission to profit from your image, and the images you own, through a very wide-ranging license to store and use them (Facebook n.d.).
Private organisations are using the data gathered on their users for research; however, these organisations sit outside the ethics frameworks that government imposes on education and health institutions (Sedenberg & Hoffmann 2016). The profit motive of these companies could undermine the privacy and security of your data (Sedenberg & Hoffmann 2016).
At the personal data level, there are some serious attempts at protecting the rights of the individual. The General Data Protection Regulation of the European Union, which comes into effect in May 2018, covers all data captured from EU citizens. It codifies the “right to be forgotten” and a “right to an explanation” of the result of any algorithmic decision (Goodman & Flaxman 2016). However, these regulations do not seem to matter when it comes to national security.
Indeed, Edward Snowden and WikiLeaks revealed that organisations like Yahoo and Google have been compelled, by courts in the United States and in Europe, to hand over user data to government bodies for national security surveillance (Wikipedia 2018). It is quite feasible that Apple, Facebook and Amazon are under the same obligations, and we just don’t know about it yet.
The use of video cameras for general surveillance erodes an individual’s right to privacy, which, although reduced in public, is still expected to some degree because of people’s perception of a “veil of anonymity” (Gibson 2017). It also indirectly erodes freedom of speech, as people become unable to express themselves without fear of reprisal (Gibson 2017).
People often say they have nothing to hide when arguing against concerns about general surveillance, but this is predicated on society and government holding today’s values into the future. Once something is recorded online, whether image or text, it is there forever and could one day be used against you; this is something people who have lived under totalitarian regimes could tell Westerners.
Online databases of images and advanced processing power, combined with the erosion of the individual right to privacy, create the perfect conditions for an explosion in the use of image processing in crime prevention, detection and prosecution. The next section focuses on the current and future use of image processing as a form of visual evidence in criminal prosecution.
Uses of Image Processing in Criminal Prosecution
Video and images are a form of visual evidence, whose purpose is to provide positive visual identification evidence (i.e. it is the same person), circumstantial identification evidence (i.e. it is a similar person) or recognition evidence (I know that it is the same person in the image) supporting the case that the accused is the offender (Gibson 2017).
Computer image processing provides visual evidence in a number of ways. Firstly, its sheer
processing power enables a very wide and deep search for this evidence within image databases or
millions of hours of video.
It also has useful capabilities for gathering video evidence. It can detect individuals across a range of different surveillance cameras as an offender moves through the landscape. Algorithms can be used to “sharpen” blurry images. Neural network architectures such as YOLO can enable a person’s face to be found in a huge database of images.
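In practice, such a search is typically an embedding lookup: each stored face is reduced to a vector by a trained network, and a probe image is matched by nearest-neighbour similarity. A minimal sketch using random stand-in embeddings:

```python
import numpy as np

rng = np.random.default_rng(0)

# Stand-in for a database of face embeddings produced by a trained network.
db = rng.normal(size=(100_000, 128)).astype(np.float32)
db /= np.linalg.norm(db, axis=1, keepdims=True)         # unit-normalise rows

# A probe: a noisy view of identity 42_000, as if from a new CCTV frame.
probe = db[42_000] + rng.normal(scale=0.05, size=128).astype(np.float32)
probe /= np.linalg.norm(probe)

# Cosine similarity against every stored face in one matrix-vector product.
scores = db @ probe
best = int(np.argmax(scores))
print(best, float(scores[best]))                        # should recover 42000
```

At this scale a brute-force product suffices; national-scale databases would need approximate nearest-neighbour indexing, which is exactly what makes very wide searches cheap.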
Variable lighting, recording quality, movement of the camera, obstructions to the line of sight and other factors leave an image open to many interpretations (Henderson et al. 2015). For this reason, an expert in “facial mapping” or “body mapping” usually examines the image and testifies in the courtroom, where they can be cross-examined (Gibson 2017). The expert may not positively identify the defendant, so at other times it is up to the juror to determine whether the offender and the defendant are the same person.
In future, as databases of images grow and computer vision processing capability accelerates, I can imagine a huge facial image database similar to the DNA databases collated in US states like California (LA Times 2012), where, instead of DNA samples, CCTV images from a cold case will be matched against the database to track down a suspect.
However, unlike DNA, where relatively few people have their profile recorded in a database, we are moving towards the entire population’s faces being recorded online somewhere, and most likely, one day, in the hands of law enforcement.
What can we learn from the risks of DNA forensic evidence and CCTV evidence, to ensure that visual evidence procured through image processing does not create false positives and injustice?
Limitations of Visual Evidence in Criminal Prosecution
We begin by understanding the limitations of visual evidence for the jurors who must evaluate it in
criminal trials.
Video is a constructed medium that can be interpreted in more than one way, even in opposing ways, in the courtroom. After the lawyers for the four police officers accused of beating Rodney King deconstructed the eyewitness video, three of the four were acquitted, yet the public outcry was so intense that it led to the LA riots (Gibson 2017).
Unlike witnesses, video and images cannot be cross-examined; however, they are efficiently absorbed by the jury compared with witnesses, who may be boring or too technical (Gibson 2017).
When evidence is presented by an expert, jurors can suffer from the “white coat effect”, which prejudices them to weight the expert’s evidence more heavily (Gibson 2017).
Therefore, visual evidence is fraught with many of the issues that face forensic evidence more broadly, including DNA evidence.
In the USA, the FBI has been using the Combined DNA Index System (CODIS) since 1994: a computer program that enables the comparison of DNA profiles in databases at the local, state and national level (Morris 2010). Recently, CODIS has been used to search for suspects via DNA matches on cold cases, and a growing proportion of criminal cases rely on these cold DNA database hits.
Worryingly, there have been many examples of miscarriages of justice in which match statistics were wildly wrong yet heavily overweighted by the jury, despite the accused having no means, motive or opportunity (Murphy 2015).
We must explore the limitations of DNA evidence to understand what limitations there could be if
image searches were used like this in the future.
Like visual evidence, DNA evidence must be evaluated by jurors in criminal trials. DNA evidence is accompanied by random match probability (RMP) statistics: the likelihood of finding a DNA match by chance.
There are many differences between the databases in CODIS: the collection process, the accuracy of samples, the criteria for inclusion in the database, and the statistical methods and programs used for analysis (Morris 2010). These differences can have very different impacts on match statistics.
Research has shown that a juror’s interpretation of the likelihood of a coincidental match also depends on how these statistics are presented (Morris 2010). The statistics are complicated, but seemingly rare events can have a surprisingly high likelihood if you present the probability of someone, somewhere matching, rather than the odds of a certain person matching. For example, the chance of some two people in a room sharing a birth day and month is greater than 50% once there are more than 22 people in the room. This is analogous to the database match probability. Indeed, when the Arizona DNA database was searched for intra-database record-to-record matches, multiple occurrences of the same DNA profile from different people were found.
The wider the search, the greater the likelihood of a coincidental match and of Type I errors (false positives); coincidental matches would therefore be much more likely in a national, or even global, database of faces, as the sketch below makes explicit. Databases such as CODIS also suffer from ascertainment bias, due to their non-random sampling.
There are currently four different ways of presenting these match statistics (three of them court approved), and research has found widely different outcomes in terms of verdict (Morris 2010). Jurors fall prey to the prosecutor’s fallacy: “drawing the inappropriate conclusion that a particular probability of chance occurrence is the same as the likelihood that the person incriminated by the statistics is innocent of the crime” (Morris 2010).
How can data scientists prevent their image databases and research from being similarly
misunderstood and misrepresented?
Recommendations
The field of forensic evidence, and especially DNA and visual evidence, is evolving, and data scientists must conduct themselves today in a way that prevents the pitfalls of injustice now and in the future.
Database standardisation is essential, in terms of the quality of images, compression and formats, plus the data dictionary used.
Data scientists must ensure that their work is statistically sound and must agree on a common methodology. They must search for opposing evidence, to avoid the trap of confirmation bias. To work in forensics, they must form close relationships with legal professionals.
Informed consent must be gained from users before their images are used in this way. And to protect privacy and justice, society must become more data literate, as these issues are having a greater impact on every part of our lives, even criminal justice.
Bibliography
boyd, danah & Crawford, K. 2012, ‘Critical Questions for Big Data’, Information, Communication &
Society, vol. 15, no. 5, pp. 662–79.
Chintala, S. 2015, The Eyescream Project: NeuralNets dreaming natural images, viewed 14 January
2018, <http://soumith.ch/eyescream/>.
Facebook n.d., ‘Facebook Terms of service’, facebook.com, viewed 17 December 2017,
<https://www.facebook.com/legal/terms>.
Gibson, A.J. 2017, On the face of it: CCTV images, recognition evidence and criminal prosecutions in
New South Wales, PhD Thesis.
Goodfellow, I.J., Pouget-Abadie, J., Mirza, M., Xu, B., Warde-Farley, D., Ozair, S., Courville, A. &
Bengio, Y. 2014, ‘Generative Adversarial Networks’, arXiv:1406.2661 [cs, stat], viewed 14
January 2018, <http://arxiv.org/abs/1406.2661>.
Goodman, B. & Flaxman, S. 2016, ‘European Union regulations on algorithmic decision-making and a
‘right to explanation’’, arXiv:1606.08813 [cs, stat], viewed 13 November 2017,
<http://arxiv.org/abs/1606.08813>.
Henderson, C., Blasi, S.G., Sobhani, F. & Izquierdo, E. 2015, ‘On the impurity of street-scene video
footage’, IET Conference Proceedings; Stevenage, The Institution of Engineering &
Technology, Stevenage, United Kingdom, Stevenage, viewed 21 January 2018,
<https://search.proquest.com/docview/1776480046/abstract/3C556FDE82424A67PQ/7>.
Hern, A. 2016, ‘Your battery status is being used to track you online’, The Guardian, 2 August, viewed 30 December 2017, <http://www.theguardian.com/technology/2016/aug/02/battery-status-indicators-tracking-online>.
Koch, G., Zemel, R. & Salakhutdinov, R. 2015, ‘Siamese neural networks for one-shot image
recognition’, ICML Deep Learning Workshop.
Krizhevsky, A., Sutskever, I. & Hinton, G.E. 2012, ‘Imagenet classification with deep convolutional
neural networks’, Advances in neural information processing systems, pp. 1097–1105.
LA Times 2012, ‘Playing fast and loose with DNA’, Los Angeles Times, 31 July, viewed 13 January 2018, <http://articles.latimes.com/2012/jul/31/opinion/la-ed-dna-database-california-20120731>.
Lipton, Z.C., Berkowitz, J. & Elkan, C. 2015, ‘A Critical Review of Recurrent Neural Networks for
Sequence Learning’, arXiv:1506.00019 [cs], viewed 5 November 2017,
<http://arxiv.org/abs/1506.00019>.
Mordvintsev, A., Olah, C. & Tyka, M. 2015, ‘Inceptionism: Going Deeper into Neural Networks’,
Research Blog, viewed 17 December 2017,
<https://research.googleblog.com/2015/06/inceptionism-going-deeper-into-neural.html>.
Morris, E.K. 2010, Statistical probabilities in a forensic context: How do jurors weigh the likelihood of
coincidence?, Ph.D., University of California, Irvine, United States — California, viewed 13
January 2018,
<https://search.proquest.com/docview/755686007/abstract/7A00420D28404DF2PQ/2>.
Murphy, E. 2015, Inside the cell: the dark side of forensic DNA, 1st edn, Nation Books, New York, NY, USA.
New South Wales Law Reform Commission 2001, Surveillance: an interim report, New South Wales Law Reform Commission, Sydney.
OfficerJoeK-9 n.d., ‘Joi’, Off-world: The Blade Runner Wiki, viewed 30 December 2017,
<http://bladerunner.wikia.com/wiki/Joi>.
Orlowski, A. n.d., ‘Cracking copyright law: How a simian selfie stunt could make a monkey out of
Wikipedia’, The Register.
Redmon, J., Divvala, S., Girshick, R. & Farhadi, A. 2015, ‘You Only Look Once: Unified, Real-Time
Object Detection’, arXiv:1506.02640 [cs], viewed 14 January 2018,
<http://arxiv.org/abs/1506.02640>.
Sedenberg, E. & Hoffmann, A.L. 2016, ‘Recovering the History of Informed Consent for Data Science
and Internet Industry Research Ethics’, arXiv:1609.03266 [cs], viewed 17 December 2017,
<http://arxiv.org/abs/1609.03266>.
US Copyright Office n.d., Compendium II of Copyright Office Practices, viewed 17 December 2017, <http://www.copyrightcompendium.com/>.
Varior, R.R., Haloi, M. & Wang, G. 2016, ‘Gated Siamese Convolutional Neural Network Architecture
for Human Re-Identification’, arXiv:1607.08378 [cs], viewed 13 January 2018,
<http://arxiv.org/abs/1607.08378>.
Wikipedia 2018, ‘Edward Snowden’, Wikipedia, viewed 13 January 2018,
<https://en.wikipedia.org/w/index.php?title=Edward_Snowden&oldid=819863748>.
Wikipedia 2017, ‘Personality rights’, Wikipedia, viewed 30 December 2017,
<https://en.wikipedia.org/w/index.php?title=Personality_rights&oldid=814604845>.