
‘epidemic’ of online child sexual abuse… ooh a competition ‘Great’ Britain! (Lancaster University)

“Seriously, the British public openly support online child pornography! Blatantly and openly show all support for it… with two fingers to anyone who tries to combat it… the Brits are ranked third in the world for viewing online child porn… it is an epidemic… MILLIONS OF THEM! Our society has completely and utterly failed in every way!”
Artificial Intelligence? … because only in a society as warped and sick as Britain’s would we need to develop an artificial intelligence to prevent us from looking at sexually explicit pictures of our own children!… fucking freaks!
(In Israel it is actually illegal NOT to report child abuse… just saying)

Police ‘under-resourced’ to tackle ‘epidemic’ of online child sexual abuse

A new report describes the police’s adoption of new technology as an “utter mess” and concludes forces “urgently need more money”.

Disrupting online abuse and exploitation: call for solutions

Organisations can apply for a share of up to £300,000 for contracts to disrupt and prevent the live online streaming of child sexual abuse.
The Home Office has up to £300,000 to invest in up to 5 innovative projects that are designed to disrupt live online child sexual abuse and exploitation.
Projects should investigate new methods and technologies. This may include detecting and disrupting live streaming or identifying and disrupting related financial transactions. It may also include other interventions such as psychological or behavioural approaches.
The competition is being run under the SBRI (Small Business Research Initiative).
Find out more about SBRI and how it works.

Identification, disruption and prevention
This competition is particularly seeking technical and psychological solutions for identification, disruption and prevention. It could include:

  • identification of live streams or associated chat logs, both in real time and from archived live-stream content
  • making use of any wider indicators of child sexual abuse and exploitation in order to identify and disrupt live-streamed content
  • supporting identification of potential victims or offenders by hosting providers
  • deterrence of potential offenders and preventing children becoming potential victims through behavioural insights or targeted communications

Competition information

  • the competition is open and the deadline for applications is 14 November 2018
  • it is open to any organisation that can demonstrate a route to market for its idea
  • we expect total project costs to be up to £60,000 and for projects to last up to 3.5 months
  • successful projects will attract 100% funded development contracts
  • applications should be made through the Crown Commercial Service’s e-Sourcing Suite


Artificial intelligence toolkit spots new child sexual abuse media online (Lancaster University)

1 December 2016
New artificial intelligence software designed to spot new child sexual abuse media online could help police catch child abusers.

The toolkit, described in a paper published in Digital Investigation, automatically detects new child sexual abuse photos and videos in online peer-to-peer networks.

The research behind this technology was conducted in the international research project iCOP – Identifying and Catching Originators in P2P Networks – funded by the European Commission Safer Internet Programme and carried out by researchers at Lancaster University, the German Research Center for Artificial Intelligence (DFKI), and University College Cork, Ireland.
There are hundreds of searches for child abuse images every second worldwide, resulting in hundreds of thousands of child sexual abuse images and videos being shared every year. The people who produce child sexual abuse media are often abusers themselves – the US National Center for Missing and Exploited Children found that 16 percent of the people who possess such media had directly and physically abused children.

Spotting newly produced media online can give law enforcement agencies the fresh evidence they need to find and prosecute offenders. But the sheer volume of activity on peer-to-peer networks makes manual detection virtually impossible. The new toolkit automatically identifies new or previously unknown child sexual abuse media using artificial intelligence.

“Identifying new child sexual abuse media is critical because it can indicate recent or ongoing child abuse,” explained Claudia Peersman, lead author of the study from Lancaster University’s School of Computing and Communications. “And because originators of such media can be hands-on abusers, their early detection and apprehension can safeguard their victims from further abuse.”

There are already a number of tools available to help law enforcement agents monitor peer-to-peer networks for child sexual abuse media, but they usually rely on identifying known media. As a result, these tools are unable to assess the thousands of results they retrieve and can’t spot new media that appear.

The iCOP toolkit uses artificial intelligence and machine learning to flag new and previously unknown child sexual abuse media. The new approach combines automatic filename and media analysis techniques in an intelligent filtering module. The software can identify new criminal media and distinguish it from other media being shared, such as adult pornography.
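
The paper's implementation is not reproduced here, but as a rough sketch of how a filtering module might fuse a filename signal with a media-analysis score, consider the Python illustration below. The training filenames, the character n-gram features, and the weighted fusion rule are all invented assumptions for illustration, not the iCOP code.

```python
# Hypothetical sketch of fusing a filename classifier with a media-analysis
# score, loosely following the description above. Training data, features
# and the fusion weighting are illustrative assumptions, not iCOP itself.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

# Toy labelled filenames (1 = suspicious, 0 = benign) -- placeholders only.
filenames = ["holiday_beach_2014.jpg", "family_bbq_august.mp4",
             "example_suspicious_a.avi", "example_suspicious_b.jpg"]
labels = [0, 0, 1, 1]

# Character n-grams tolerate the obfuscated spellings common in filenames.
filename_model = make_pipeline(
    TfidfVectorizer(analyzer="char_wb", ngram_range=(2, 4)),
    LogisticRegression(),
)
filename_model.fit(filenames, labels)

def fused_score(name: str, media_score: float, weight: float = 0.5) -> float:
    """Blend the filename probability with an externally computed
    media-analysis score; the 50/50 weighting is an arbitrary choice."""
    p_name = filename_model.predict_proba([name])[0, 1]
    return weight * p_name + (1 - weight) * media_score

# Files scoring above some threshold would be queued for human review.
print(fused_score("new_upload_0001.avi", media_score=0.3))
```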

The researchers tested iCOP on real-life cases and law enforcement officers trialed the toolkit. It was highly accurate, with a false positive rate of only 7.9% for images and 4.3% for videos. It was also complementary to the systems and workflows they already use. And since the system can reveal who is sharing known child sexual abuse media, and show other files shared by those people, it will be highly relevant and useful to law enforcers.
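
For readers unfamiliar with the metric, the false positive rate quoted above is simply the share of benign items that are wrongly flagged. A minimal illustration of the arithmetic, with made-up counts rather than figures from the iCOP evaluation:

```python
# False positive rate = FP / (FP + TN). The counts below are invented
# purely to show the arithmetic; they are not from the iCOP trials.
false_positives = 79   # benign images wrongly flagged
true_negatives = 921   # benign images correctly passed over
fpr = false_positives / (false_positives + true_negatives)
print(f"false positive rate: {fpr:.1%}")  # -> 7.9%
```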

“When I was just starting as a junior researcher interested in computational linguistics, I attended a presentation by an Interpol police officer who was arguing that the academic world should focus more on developing solutions to detect child abuse media online,” said Peersman. “Although he clearly acknowledged that there are other crimes that also deserve attention, at one point he said: ‘You know those sweet toddler hands with dimple-knuckles? I see them online… every day.’ From that moment I knew I wanted to do something to help stop this. With iCOP we hope we’re giving police the tools they need to catch child sexual abusers early based on what they’re sharing online.”

Warning follows report into online child sexual abuse risk (Lancaster University)

22 January 2018
If the public are serious about wanting to protect children from online sexual abuse, more investment in skilled professionals is needed now.

The stark warning comes from researchers following publication of a new report commissioned by the Independent Inquiry on Child Sexual Abuse (IICSA) which coincides with the first day of the public hearing into online child sexual abuse.

Victims of online facilitated child sexual abuse often remain undetected until images or videos of their abuse are picked up by criminal investigations.

And Professor Corinne May-Chahal, of Lancaster University, who led this piece of research, warns: “These involve a growing number of children but the resources needed to detect abuse, trace victims and help them get support is very limited.”

She added that a key question asked by parents and those working with children is what puts a child at risk of online sexual abuse.

The Lancaster report, published today (January 22) examines what is known about the characteristics, vulnerabilities and on-and-offline behaviour of victims of online-facilitated child sexual abuse and exploitation.

Whilst any child, even a younger child, could be at risk, what is less well known is that the risk of direct contact is as likely to come from people the child knows as from strangers.

Online-facilitated Child Sexual Abuse (OFCSA) is a growing area of concern and includes a wide range of actions and events ranging from feeling upset by viewing sexual content online to live streaming sexual acts at the request of a perpetrator. It also includes the recording of offline CSA.

Sexting is not always abusive but, if images are shared without permission or distributed into peer-to-peer networks, it can be.

Researchers also found:
Most studies suggest girls are at higher risk but this may be because boys are less likely to admit to OFCSA.

Vulnerability characteristics included:
– Adverse childhood experiences, such as physical and sexual abuse and exposure to parental conflict, made children more vulnerable to online victimisation. However, any child from any socio-economic background can fall victim to OFCSA
– Disability, particularly disabled boys, who may be at equal or greater risk than girls
– Above-average internet use increased vulnerability, especially when interacting with other characteristics such as having a disability or low self-esteem (at the time of the research, 12-15 year olds spent on average just over 2.5 hours per day online; according to Ofcom 2017, this has since risen to 21 hours a week)
– Children exploring gender identities online may be more vulnerable
What protects children online:
– In managing unwanted experiences, many children develop important digital skills that contribute to their overall resilience
– The more upset or distressed a child is by online-facilitated CSA the more likely they are to tell others; usually friends or parents
– Children are unlikely to tell others if they are embarrassed or afraid

IICSA launched 13 investigations into a broad range of institutions. One of the investigations focuses on the institutional responses to child sexual abuse and exploitation facilitated by the Internet.

The Lancaster University research, a ‘rapid evidence assessment’, provides an overview of the current state of evidence on a selected topic and is one of several reports published today.

Rapid Evidence Assessment 
Characteristics and vulnerabilities of victims of online-facilitated child sexual abuse and exploitation (Lancaster University)

Pioneering New Work in Online Child Protection

Research by Professor Awais Rashid of the University of Lancaster
In collaboration with Dr James Walkerdine & Dr Phil Greenwood of Relative Insight and Dr Paul Rayson & Dr Alistair Baron of the University of Lancaster.

Big Data linguistic research is pioneering a new field in online child protection called Digital Persona Analysis (DPA), automating the process of detecting sexual predators online who masquerade as children.

The research, accredited to the Global Uncertainties Programme (predecessor to PaCCS), was led by Professor Awais Rashid of the University of Lancaster and funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Economic and Social Research Council (ESRC).

More children than ever are using social media and network sites online, increasing the number of children at risk from sexual predators. Online child protection is now a key concern.

DPA analyses vast quantities of data at a higher level, reporting individual personas and behaviours to investigators. It exploits the “Isis Toolkit” (part of the larger Isis project), which aimed to detect criminals hiding behind multiple identities (mainly adults posing as children). This research built on internationally leading work on corpus comparison techniques that used statistical natural language analysis to examine the conversational behaviour of the British population in the 1990s. The majority of the research utilised semantic analysis, in which keywords are characterised based on contextual information. This created the ability to operate in the face of noisy language data and deceptive behaviour, enabling the Isis Toolkit to detect masquerading tactics with a high degree of accuracy.

‘The Isis Toolkit can detect, with an accuracy of 94%, when an adult is masquerading as a child compared to children participating in controlled experiments.’
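
The report does not describe the toolkit's internals, but the corpus comparison it cites is commonly implemented with log-likelihood keyness, a standard statistic in the corpus linguistics tradition associated with Rayson's work: it scores how strongly each word distinguishes one corpus from another. The sketch below shows that statistic on toy chat samples; the corpora are placeholders, not Isis data.

```python
# Illustrative sketch of corpus comparison via Dunning-style log-likelihood
# keyness. The two "corpora" below are toy placeholders, not real chat logs.
import math
from collections import Counter

def log_likelihood(freq_a: int, total_a: int,
                   freq_b: int, total_b: int) -> float:
    """Log-likelihood (G2) for one word's frequency across two corpora."""
    expected_a = total_a * (freq_a + freq_b) / (total_a + total_b)
    expected_b = total_b * (freq_a + freq_b) / (total_a + total_b)
    ll = 0.0
    if freq_a:
        ll += freq_a * math.log(freq_a / expected_a)
    if freq_b:
        ll += freq_b * math.log(freq_b / expected_b)
    return 2 * ll

corpus_child = Counter("lol gonna play game after school lol".split())
corpus_suspect = Counter("shall we play a game after your school".split())
n_child = sum(corpus_child.values())
n_suspect = sum(corpus_suspect.values())

# High-scoring words mark vocabulary that separates the two personas.
for word in sorted(set(corpus_child) | set(corpus_suspect)):
    score = log_likelihood(corpus_child[word], n_child,
                           corpus_suspect[word], n_suspect)
    print(f"{word:10s} {score:.2f}")
```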

IMPACTS

This highly interdisciplinary research (combining computer and behavioural science with linguistics) has led to collaboration between departments within the university and a wide range of other communities outside academia. This work has delivered impact in four different areas:

Commercialisation

The Isis Toolkit has been licensed by spin-out company Relative Insight, where it has been successfully incorporated within broader security and child protection commercial offerings. The generalisation of the technology and the scaling up of the analysis capabilities via a cloud-enabled API now allow it to be applied to any type of investigative activity where digital text needs to be analysed. In addition to the security domain, the company has further diversified and uses the technology to help brands and advertising agencies extract insights from digital sources in order to better understand their target consumers. Due to the success of the latter, Relative Insight has grown rapidly and now employs 13 people, with clients including Havas, Saatchi & Saatchi, Ogilvy, Twitter and Microsoft Mobile.

Law Enforcement

Successful live trials with UK police forces, along with the Child Exploitation and Online Protection Centre (CEOP), have demonstrated the accuracy of the Isis Toolkit on real data sets while also decreasing the amount of analysis time required.

“[The Isis toolkit] provides the ability to focus analysis on specific information [and] allows investigations to be more focused and therefore potential victims of grooming or contact abuse to be identified more easily”
Quote from the evaluation of the aforementioned live trials.

The toolkit has also been licensed for use by the Royal Canadian Mounted Police, who see this research as an ‘operational necessity’. Large-scale agreements with other international customers could emerge in the future.

Education

Through developing the Toolkit, researchers have created and delivered internet safety lessons to over 500 students. This has generated strong links with teachers leading to the development of comprehensive lesson plans on e-safety topics for Key Stages 2-5 that have been rolled out across the region through the South Lakes Teaching School Alliance (SLTSA). In fact, the Isis Toolkit has been deployed worldwide through the release of the free iTunes app called ChildDefence, empowering children to protect themselves online. This was built upon in 2014 when it formed the basis of one of the WeProtect projects, an online child protection initiative set up by Prime Minister David Cameron.

Internet Governance

The research led to a policy paper prepared for the Chartered Institute for IT (BCS) and presented to Alun Michael MP in 2009. This paper was subsequently selected as the single UK contribution to the 2009 and 2010 Internet Governance Forums (in Sharm el-Sheikh and Vilnius respectively). It also provided written evidence to the Commons Select Committee on Education (2010), as well as contributing to the Proposal for a Directive of the European Parliament and of the Council on combating the sexual abuse and sexual exploitation of children and child pornography, repealing Framework Decision 2004/68/JHA (COM/2010/0094).

A report requested by the European Parliament’s Committee on Gender Equality used the Isis toolkit as a case study to support the use of a modified toolkit in assisting in the detection and management of cyber coercion and rape of women and girls.
Simply by raising awareness of online protection issues, this research inevitably has an impact on the general public. In this area of online child protection it is crucially important both to shape public policy and to inform the general public.

AI toolkit can help police trace child sexual abuse online

London, Dec 3 (IANS) A new artificial intelligence (AI) toolkit, ‘iCOP’, has been designed to identify child sexual abuse content online and help police catch the abusers, researchers report.

According to the study published in the journal Digital Investigation, the ‘iCOP’ toolkit automatically identifies new or previously unknown child sexual abuse media using AI.

“With iCOP, we hope we are giving police the tools they need to catch child sexual abusers early based on what they are sharing online,” Peersman said.

Toddler hand inspired AI child sex abuse tool

The fight to rid the web of images of child abuse has gained a new tool – in the form of artificial intelligence.
The AI toolkit, inspired by photos of a toddler’s hand, can automatically detect new child sexual abuse photos and videos in online networks.
Spotting newly produced media can give law enforcement agencies the evidence they need to find and prosecute offenders, researchers said.
The system is freely available to law enforcement agencies.
It is already being used in several European countries.
The research was carried out as part of the international research project iCOP (identifying and catching originators in peer-to-peer networks), which was funded by the European Commission Safer Internet Programme.
It was carried out by researchers at Lancaster University, the German Research Centre for Artificial Intelligence and University College Cork in Ireland.
Lead researcher Claudia Peersman, from Lancaster University, explained what inspired her to develop the system.
“When I was just starting as a junior researcher interested in computational linguistics, I attended a presentation by an Interpol police officer who was arguing that the academic world should focus more on developing solutions to detect child abuse media online,” she said.
“Although he clearly acknowledged that there are other crimes that also deserve attention, at one point he said: ‘You know those sweet toddler hands with dimple-knuckles. I see them online every day’. From that moment I knew I wanted to do something to help stop this.”
Early detection
It works using a combination of techniques. The first is file name analysis, picking up typical filenames used by paedophiles, such as ‘ch1ld’. These cannot be picked up by standard computer analysis, and while they are easily spotted by humans, the sheer volume of images makes it impossible for law enforcers to check every file.
The software can also identify specialised vocabulary commonly used by paedophiles and associated with images, such as ‘Lolita’, inspired by the Vladimir Nabokov novel about a middle-aged man who becomes obsessed with a young girl.
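As a minimal sketch of the normalisation idea behind catching spellings like ‘ch1ld’, one can map common character substitutions back to letters before matching against a watch-list. The substitution table and placeholder terms below are illustrative assumptions, not the iCOP lexicon.

```python
# Minimal sketch: undo common "leetspeak" substitutions, then check the
# normalised filename against a watch-list. Both the substitution table
# and the watch-list terms are illustrative placeholders.
LEET = str.maketrans({"1": "i", "3": "e", "4": "a",
                      "0": "o", "5": "s", "7": "t"})
WATCHLIST = {"child", "lolita"}  # placeholder terms from the article

def normalise(name: str) -> str:
    return name.lower().translate(LEET)

def flag_filename(name: str) -> bool:
    text = normalise(name)
    return any(term in text for term in WATCHLIST)

print(flag_filename("ch1ld_photo.jpg"))   # True: "ch1ld" -> "child"
print(flag_filename("beach_2016.jpg"))    # False
```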
The second element of the toolkit is image analysis. The AI software can spot images of children via things such as subtle differences in skin colour compared to adults or by spotting movements associated with sexual abuse.
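The article does not describe the image model itself. As one heavily simplified illustration of a skin-tone signal, the sketch below applies a classic rule-based skin mask (the Kovac et al. heuristic, not the iCOP model) to estimate the proportion of skin-coloured pixels, a quantity a downstream classifier might use as one feature among many.

```python
# A heavily simplified sketch of one possible image-analysis signal:
# the fraction of skin-tone pixels in an image. The RGB thresholds are
# the textbook Kovac et al. heuristic, not the iCOP model.
import numpy as np
from PIL import Image

def skin_pixel_ratio(path: str) -> float:
    rgb = np.asarray(Image.open(path).convert("RGB"), dtype=np.int16)
    r, g, b = rgb[..., 0], rgb[..., 1], rgb[..., 2]
    # Rule-based skin mask: R>95, G>40, B>20, max-min>15, |R-G|>15, R>G, R>B.
    mask = ((r > 95) & (g > 40) & (b > 20)
            & (rgb.max(axis=-1) - rgb.min(axis=-1) > 15)
            & (abs(r - g) > 15) & (r > g) & (r > b))
    return float(mask.mean())

# e.g. skin_pixel_ratio("photo.jpg") -> 0.0..1.0, used as one feature.
```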
Hundreds of thousands of child sexual abuse images and videos are being shared every year. There are already a number of tools available to help law enforcement agents monitor peer-to-peer networks for child abuse media, but they usually rely on identifying known media.
“Identifying new child sexual abuse media is critical because it can indicate recent or ongoing child abuse,” said Ms Peersman.
“And because originators of such media can be hands-on abusers, their early detection and apprehension can safeguard victims from further abuse.”
Tests of the toolkit on real images of child sexual abuse appeared to be highly accurate, with a false positive rate of 7.9% for images and 4.3% for videos, according to the researchers.