
Online Content Depicting Child Sexual Abuse Is Growing at an Alarming Rate

“I think it’s one of the most pressing issues of our time… it’s an EPIDEMIC of paedophiles and child sex abusers, both male and FEMALE, at all levels of society, all walks of life, all occupations, all colours and creeds… who now think it is completely normal to be logging onto the internet to view and share images of children being sexually abused… Tortured, raped… Sometimes murdered.

Tens of millions of adults across the world (the US, Canada and the UK being the biggest viewers) think it is completely normal, think they can do it with impunity…

And now it’s got to the stage where they can organise themselves and literally gang-stalk anyone who speaks out against it (or their victims)… targeting their homes, their jobs, their family and friends…

How can a society be expected to solve climate and environmental issues, rampant corruption… it can’t even protect its children from being raped by adults! It’s the biggest sign of a society that is about to collapse.

… and I think a lot more women are covering for their husbands or partners than we imagined!

Online Content Depicting Child Sexual Abuse Is Growing at an Alarming Rate

This investigation from the New York Times is stunning.

The amount of online imagery depicting children being sexually abused and exploited is out of control and it’s only getting worse.

In a new report, The New York Times investigated how technology companies and the government are failing to keep what it calls a “criminal underworld” of disturbing, explicit child pornography from spiraling out of control. Last year, tech companies reported finding a record 45 million online photos and videos of the abuse, more than twice the volume of what they found the year prior.

As the internet has expanded, so has the availability of content that depicts children, some of whom are only three or four years old, being tortured and abused.

“Historically, you would never have gone to a black market shop and asked, ‘I want real hard-core with 3-year-olds,’” said Yolanda Lippert, a prosecutor in Cook County, Ill., who leads a team investigating online child abuse. “But now you can sit seemingly secure on your device searching for this stuff, trading for it.”

Similar content has always been a problem online, just not on this level; a decade ago, the reported number of photos and videos found in a year was only around one million.

Law enforcement agencies assigned to tackle the problem say they are understaffed and underfunded, but at the heart of the problem are the technology companies themselves, which have done more to enable the spread of this horrific and tragic content than they have to stop it.

After years of uneven monitoring of the material, several major tech companies, including Facebook and Google, stepped up surveillance of their platforms. In interviews, executives with some companies pointed to the voluntary monitoring and the spike in reports as indications of their commitment to addressing the problem.

But police records and emails, as well as interviews with nearly three dozen local, state and federal law enforcement officials, show that some tech companies still fall short. It can take weeks or months for them to respond to questions from the authorities, if they respond at all. Sometimes they respond only to say they have no records, even for reports they initiated.

Hany Farid, a professor of digital forensics at the University of California, Berkeley, worked with Microsoft in 2009 to develop PhotoDNA, a technology for detecting child sexual abuse material. Farid told the Times that tech companies have been hesitant to dig into the issue.

“The companies knew the house was full of roaches, and they were scared to turn the lights on,” he said. “And then when they did turn the lights on, it was worse than they thought.”

The story is a familiar one for technology companies. Egregious, exploitative content proliferates on their platforms, and the companies are often slow to take action. Versions of this have played out over the last several years with hate groups, terrorist propaganda and other types of abusive and damaging content. Often, if resolution comes, it’s only after public outcry.

New York Times investigation finds massive spike in online child sex abuse


There were 18.4 million reports of child pornography on the internet last year, which included 45 million images and videos of child sexual abuse, according to an investigation by the New York Times.

Why it matters: Despite tech companies’, law enforcement agencies’ and legislators’ best efforts to prevent the spread of child pornography, the number of reports has exploded over the last two decades as technology makes abusive images more accessible and easier to spread.

How it works: The number of child abuse reports has increased in tandem with the rise of encryption technology, specifically encrypted messaging apps; a minimal sketch of why encryption blinds platforms and investigators follows the list below.

  • Facebook announced in March plans to encrypt Messenger, which was responsible for nearly 12 million of the 18.4 million child pornography reports last year, according to the Times.
  • Pedophiles use these apps to swap or sell their collections of images and videos.
  • Increasingly, criminals are using encryption technologies to protect websites and imagery from investigators.
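
To make that mechanism concrete, here is a minimal sketch of why end-to-end encryption blinds the platform in the middle. It is written in Python using the third-party cryptography package; the scenario is illustrative only, with a single symmetric Fernet key standing in for the key-exchange protocol a real messaging app would run between devices.

```python
from cryptography.fernet import Fernet  # third-party: pip install cryptography

# In end-to-end encryption the key material exists only on the endpoint
# devices; here one symmetric Fernet key stands in for a real key exchange.
key = Fernet.generate_key()

# The sender encrypts on-device, before anything touches the network.
ciphertext = Fernet(key).encrypt(b"image bytes would go here")

# The relaying server sees only this opaque blob, so any server-side
# hash matching or image classification has nothing meaningful to inspect.
print(ciphertext[:32])

# Only the recipient, who also holds the key, can recover the original.
assert Fernet(key).decrypt(ciphertext) == b"image bytes would go here"
```

The point is structural: whatever scanning the platform wants to do has to happen on the endpoints or not at all.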

By the numbers: In 1998, there were more than 3,000 reports of child sexual abuse imagery. “In 2014, that number surpassed 1 million for the first time,” per the Times.

Context: Congress in 2008 passed the PROTECT Our Children Act, which foresaw many aspects of the proliferation of child pornography. The Times, however, found that the federal government had not fulfilled major aspects of the legislation.

The big picture: The problem is global. Most of the images found last year originated in countries outside the U.S., but the problem is compounded by Silicon Valley, which hosts companies accused of facilitating the spread of child abuse imagery.

  • Yes, but: Those same companies are also the leaders in reporting child pornography to the authorities.

What’s next: A paper recently published by the National Center for Missing and Exploited Children suggested that law enforcement agencies and platform operators like Google, Microsoft, Facebook and Twitter may be able to develop software that automatically detects child pornography using machine learning.

Go deeper: Read the full New York Times investigation


Viewing of online child abuse images a ‘social emergency’

8 November 2016 | UK


The number of people viewing online child sexual abuse images in the UK amounts to a “social emergency”, says the NSPCC.

A report by the charity suggests the number of individuals looking at such images could exceed half a million.

It is calling for a “robust action plan” to cut off the supply of content.

The Home Office says it is working with law enforcers, companies and voluntary organisations to stamp out online child exploitation.

In the past five years, the number of offences of viewing child sexual abuse images recorded by police under the Obscene Publications Act has more than doubled across the UK, reaching a total of 8,745 in 2015.

But the NSPCC believes the true scale of offending in the UK to be far greater.

By applying the findings of a German population study – which looked into male self-reported sexual interest in children – to a UK scenario, the charity estimates there could be up to half a million men in the UK who have viewed child sexual abuse images. This is based on an estimated internet-using population of 21.63 million men aged 18-89.

This number is much greater than previous estimates.

‘Shocking scale’

In 2013 it was suggested that around 50,000 UK-based individuals were involved in downloading and sharing indecent images of children.

Last month, police chiefs said they feared the number might have risen significantly since then, with one report putting it at up to 100,000.

Simon Bailey, National Police Chiefs’ Council lead for child protection, said the NSPCC’s estimate “highlights the potentially shocking scale of what we are now dealing with”.

On average, 375 offenders were arrested every month, he said.

He added: “We agree with the NSPCC that the police alone cannot stop the demand for child abuse images and more needs to be done to prevent abuse in the first place.”

Stop it Now is a child sexual abuse prevention campaign run by the Lucy Faithfull Foundation. It offers information and support for users of illegal online images. Between 13 October 2015 and 31 October 2016, 16,647 users accessed the anonymous self-help section of its website.

People can also report illegal content through the UK charity, the Internet Watch Foundation (IWF). In 2015, it removed 68,092 URLs hosting child sexual abuse imagery.

The NSPCC is now calling for:

  • Internet firms operating in the UK to sign up to a set of minimum standards, enforced by a backstop regulatory power.
  • An independent annual audit of the current self-regulatory framework to ensure its effectiveness.
  • Government to produce an annual transparency report on the identification and removal of child abuse images accessed from within the UK.

Ann’s story: ‘It was completely traumatic’

Image caption: Ann never suspected her husband (picture posed by model)

“It was just an ordinary day. I was getting the children ready for school and my youngest daughter came to say ‘There’s someone at the door’.

“There was a whole crowd at the door. I was taken to the kitchen. I was questioned about events I had no knowledge of. I found out later that my husband had been using my profile to contact other people.

“They went upstairs, got my husband out of bed and the police then took him away.

“Social workers had come ready to take the children.

“It was completely traumatic. I felt like my life had been turned upside down. Trying to recover from that was very difficult. You just never suspect your own husband.

“The charges became more serious. I was advised to have no contact with him.

“Even up to the trial he didn’t believe he had done anything wrong.

“I would say to people in my position ‘Talk about it. Talk to friends. It’s the secrecy that keeps it going’.”

Ann’s husband was arrested on suspicion of downloading and distributing indecent images of children. He is currently serving a sentence for multiple child sex offences.

(Names have been changed to protect identity)


Peter Wanless, chief executive of the NSPCC, recognised progress had been made through the work of the National Crime Agency and the IWF but said more had to be done.

He said: “The sheer numbers of people viewing child sexual abuse images online must be addressed as a social emergency.

“It is two years since government made it a national priority to rid the internet of these vile crimes against children, but today’s report reveals how horrifyingly prolific the problem remains.

“That’s why today we are calling for a robust action plan to cut off the supply of child sexual abuse images in circulation, and deter adults from seeking out child abuse online.

‘Dark corners’

“We should be long past the point when there are dark corners of the internet where these terrible crimes against children are hosted for the pleasure of paedophiles.”

In a statement, the Home Office said: “We remain committed to working with partners in law enforcement, industry and voluntary organisations to stamp out online child sexual exploitation.

“The National Crime Agency has received additional funding of £10 million for further specialist teams, enabling a near doubling of their investigative capability, meaning more children being safeguarded.

“In recognition of the scale and global nature of this crime, the government has led international action on online child sexual exploitation through the WePROTECT Global Alliance, working with countries, the industry, and civil society organisations to develop a co-ordinated response.”

How the spread of child abuse imagery online is changing the debate over encryption

How should we balance freedom of speech against security?


Content warning: This post discusses an investigation into the proliferation of child sexual abuse imagery online.

There are internet problems, and there are platform problems. It’s a distinction I wrote about earlier this year, when trying to think through how tech companies should respond to the Christchurch killing. And it’s a distinction I thought about again this weekend, when I read the New York Times’ disturbing investigation into the rapid spread of child sexual abuse imagery on the internet.

Here’s the high-level overview from reporters Michael H. Keller and Gabriel J.X. Dance:

Pictures of child sexual abuse have long been produced and shared to satisfy twisted adult obsessions. But it has never been like this: Technology companies reported a record 45 million online photos and videos of the abuse last year.

More than a decade ago, when the reported number was less than a million, the proliferation of the explicit imagery had already reached a crisis point. Tech companies, law enforcement agencies and legislators in Washington responded, committing to new measures meant to rein in the scourge. Landmark legislation passed in 2008.

Yet the explosion in detected content kept growing — exponentially.

As you might expect, the investigation explores where to place blame for the growth of this kind of crime. And soon enough it comes to tech platforms — in particular Facebook Messenger. The reporters write:

While the material, commonly known as child pornography, predates the digital era, smartphone cameras, social media and cloud storage have allowed the images to multiply at an alarming rate. Both recirculated and new images occupy all corners of the internet, including a range of platforms as diverse as Facebook Messenger, Microsoft’s Bing search engine and the storage service Dropbox. […]

Encryption and anonymization can create digital hiding places for perpetrators. Facebook announced in March plans to encrypt Messenger, which last year was responsible for nearly 12 million of the 18.4 million worldwide reports of child sexual abuse material, according to people familiar with the reports. Reports to the authorities typically contain more than one image, and last year encompassed the record 45 million photos and videos, according to the National Center for Missing and Exploited Children.

In a Twitter thread, Facebook’s former security chief, Alex Stamos, stood up for his old colleagues here: “I’m glad the NY Times is talking to the incredible people who work on child safety every day,” he wrote. “One point they seem to be a bit confused about: the companies that report the most [child sexual abuse material] are not the worst actors, but the best.” And indeed, if you talk to NCMEC and other organizations who work on this issue, they’ll tell you that they see tech platforms as essential partners in fighting child predators.

But what if tech platforms weren’t such good partners? And what if the reason was encryption?

It’s a tough debate, and it’s one that we’re about to walk straight into the middle of. The reason is Facebook’s plan to encrypt its core messaging apps — Messenger and WhatsApp — by default. The effect of the move on law enforcement’s ability to fight crime is unknown, but certain to be controversial.

I find the fears to be straightforward and rational. Today, thanks to Facebook’s efforts in particular, law enforcement detects millions of cases in which terrible images are being shared around the world. According to a discussion about encryption I recently attended at Stanford, this leads to the arrest of perpetrators in thousands of cases a year. But if you were to shield all those messages using encryption, the argument goes, you would essentially be turning a blind eye to a disturbing and growing problem.

To some critics, the circumstances offer cause to dramatically reduce speech on Facebook products. Damon Beres makes his case in OneZero:

It may simply be impossible to moderate the content that is exchanged between all of those people. But maybe there’s a simpler, blunter approach. We take for granted that you can send images, links, and videos on Messenger, but what if you… couldn’t? What if we’ve gotten the cost-benefit of being able to send a video on such a large, central platform wrong? Messenger could simply be text-based, as old messaging services were: Easier to moderate automatically, and without the risk of harmful videos or images being distributed. There’s an even stronger argument that the same calculus might be applied to Live videos on Facebook, which have previously allowed people to broadcast shooting rampages and suicides. True, some users would go elsewhere, the content would persist in some fashion, but it would not be supported by the dominant social network. There is a chance, at least, that its creation and distribution would be impeded in some way, especially if other companies followed suit.

I’m sure the idea of banning all link- and image-sharing in Messenger will find favor in, for example, authoritarian governments. Just imagine the nettlesome dissent that gets spread via links and images! And yet it also seems notable that neither Russia nor China has taken such an extreme step — they have instead ramped up their dystopian surveillance operations in an effort to root out dissent at the source.

In a more measured (and members-only) post, Ben Thompson still takes a dim view of Facebook’s plans for the default encryption of its messaging apps:

Evil folks will always be able to figure out the most efficient way to be evil. The question, though, is how much friction do we want to introduce into the process? Do we want to make it the default that the most user-friendly way to discover your “community”, particularly if that community entails the sexual abuse of children, is by default encrypted? Or is it better that at least some modicum of effort — and thus some chance that perpetrators will either screw up or give up — be necessary?

To take this full circle, I find those 12 million Facebook reports to be something worth celebrating, and preserving. But, if Zuckerberg follows through with his “Privacy-Focused Vision for Social Networking”, the opposite will occur.

To state the obvious: the trade-offs involved in the discussion of encryption vs. security are agonizing. It’s easy to defend encryption in the context of most private discussions between adults, whether it’s dissent against the government or of a more personal nature. It’s much harder to defend encryption when it’s being used to share images of child abuse, or to plan terrorist acts. And we lack easy methods for balancing the risks versus the benefits. How much freedom does an encrypted messaging platform have to support, to make up for the terrorism that it might contribute to? How do you design that test?

One way we can approach the problem is by thinking about it in terms of internet problems versus platform problems. As I wrote earlier this year:

Platform problems include the issues endemic to corporations that grow audiences of billions of users, apply a light layer of content moderation, and allow the most popular content to spread virally using algorithmic recommendations. Uploads of the attack that collect thousands of views before they can be removed are a platform problem. Rampant Islamophobia on Facebook is a platform problem. Incentives are a platform problem. Subreddits that let you watch people die were a platform problem, until Reddit axed them over the weekend.

Internet problems include the issues that stem from the existence of a free and open network connecting all of humanity together. The existence of forums that allow white supremacists to meet, recruit new believers, and coordinate terrorist attacks is an internet problem. The proliferation of free file-sharing sites that allow users to post copies of gruesome videos is an internet problem. The rush of some tabloids to publish their own clips of the shooting, or to analyze the alleged killer’s manifesto, is an internet problem.

Viewed this way, I see the spread of child abuse imagery online as much more of an internet problem than it is a platform problem. It’s true that platforms provide an easy way to disseminate this content — but it’s also true that predators have many, many alternatives to Messenger, and actively use them. I’ll never forget the shudder of a person who used to work at the Tor Project when they told me that a meaningful percentage of the site’s users at any given time appeared to be actively engaged in sharing child abuse imagery.

And that’s to say nothing of the other big platforms where child abuse imagery lives. These files exist and are transmitted on iOS, Android, Mac, and Windows, to name four big ones. Should we compel those platforms to scan user screens periodically and check them against hash lists of known child abuse imagery? It’s possible to do that without involving the encryption debate at all — users’ screens aren’t encrypted. Does that make it a better idea, or a worse one?
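
For concreteness, here is roughly what a check against such a hash list looks like. This is a minimal sketch using exact-match SHA-256 digests; deployed systems such as Microsoft’s PhotoDNA use perceptual hashes that survive resizing and re-encoding, and the list itself is distributed by vetted bodies such as NCMEC rather than assembled by the platform. The names below (KNOWN_HASHES, matches_known_imagery) are hypothetical.

```python
import hashlib
from pathlib import Path

# Illustrative stand-in for a vetted industry hash list (in practice
# distributed by a body such as NCMEC, not assembled by the platform).
KNOWN_HASHES: set[str] = set()

def file_digest(path: Path) -> str:
    """Hex SHA-256 of a file's bytes, read in chunks to bound memory use."""
    h = hashlib.sha256()
    with path.open("rb") as f:
        for chunk in iter(lambda: f.read(1 << 16), b""):
            h.update(chunk)
    return h.hexdigest()

def matches_known_imagery(path: Path) -> bool:
    """Exact-match check against the hash list.

    A single changed byte defeats an exact hash, which is why deployed
    systems rely on perceptual hashing rather than plain SHA-256.
    """
    return file_digest(path) in KNOWN_HASHES
```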

Child abuse imagery is an internet problem because it’s fundamentally about how the friction involved in bad people meeting one another, and enacting awful schemes, has now dropped to zero. You could close every big tech company on earth and, assuming the TCP/IP protocol still existed, still find that child abuse imagery was spreading around the world.

In the meantime — happily — it’s an internet problem that tech platforms have worked actively to solve. I’m sure they could work harder and do more, but it’s notable that at a time when people hate platforms for almost everything, the people closest to the subject — the FBI and NCMEC, to name two — seem genuinely pleased with the partnerships they have. It might not be possible to ramp these efforts up, or even preserve them as is, in a world where encrypted communications are the default.

But it’s also worth trying. These images will continue to proliferate around the internet regardless of which platforms are currently dominant. To focus narrowly on the question of how they are transmitted lets a great many people — and companies — off the hook. A solution that preserves encryption while automatically checking shared images or links for connections to known child abuse imagery and reporting it to law enforcement might not be possible. But before we give up on the idea of private communication online, we ought to look for one.
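
One shape such a solution could take is client-side screening: hash the image on the sender’s device, consult the blocklist before anything is encrypted, and only then encrypt for transport. A minimal sketch, combining the hashing and encryption pieces sketched earlier (send_image and KNOWN_HASHES are hypothetical names; this is a thought experiment, not any platform’s actual or announced design):

```python
import hashlib
from cryptography.fernet import Fernet  # third-party: pip install cryptography

KNOWN_HASHES: set[str] = set()  # stand-in for a vetted industry hash list

def send_image(image_bytes: bytes, channel_key: bytes) -> bytes | None:
    """Screen an image on-device, then encrypt it for transport.

    Returns ciphertext to transmit, or None if the image matched the
    blocklist (a real client would file a report rather than just refuse).
    """
    if hashlib.sha256(image_bytes).hexdigest() in KNOWN_HASHES:
        return None  # matched known imagery: block, do not send
    # Screening happens before encryption, so the server still sees
    # nothing but ciphertext for legitimate traffic.
    return Fernet(channel_key).encrypt(image_bytes)

ciphertext = send_image(b"example image bytes", Fernet.generate_key())
```

Whether screening of this kind preserves the guarantees people expect from encrypted messaging is exactly the debate described above.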
