Anti-gravity, artificial intelligence, astro-physics, Chemistry, cosmology, Dark Matter, DNA, energy, Environment, Genetics, GUT-CP, hydrides, hydrino, HydrinoEconomy, Millsian, Molecular modelling, New elements, particle physics, Randell Mills, SunCell, technology

Science On Tap… … TEL AVIV! ;D

“I’m thinking maybe Tel Aviv? The Weizmann Institute of Science?… some other people. Does Tel Aviv not host an annual piss up, with science lectures in bars?”
Science On Tap
“That’s the one!… yeah… in May? I was going to attend an Ayahuasca conference in Spain, but I’m going to go and check Israel out instead… go and do Tel Aviv and Haifa… see if I can fit in some Krav Maga training whilst I’m there” ;D~

Does anyone from the UK have a problem with me visiting Israel? Oh wait…
…………………./´¯/)
………………..,/¯../
………………./…./
…………./´¯/’…’/´¯¯`·¸
………./’/…/…./……./¨¯\
……..(‘(…´…´…. ¯~/’…’)
………\……………..’…../
……….”…\………. _.·´
…………\…………..(
…………..\………….\…


Science on Tap 2017
An influential initiative, creating a new urban culture that has been adopted around the globe – presented by Yivsam Azgad, Spokesman and Curator of the Weizmann Institute of Science, ISRAEL.
What is it? About 60 leading scientists and outstanding PhD students appear – on the same day, same hour – in bars and cafes around the city for informal talks with the patrons on the open scientific questions in their fields, on the sense of discovery, and on life on the “frontier” of science.
Are there parallel realities? Do dark energy and dark matter rule the Universe? How did life originate? Can we build a brain? Is nuclear fusion the solution to our energy problems? What do we mean by “personalized medicine?” Why do stars explode? Are we truly made of stardust? These are just a few of the questions that the scientists discuss.

Science on Tap 2018
Dozens of top scientists and outstanding research students from the Weizmann Institute of Science will be in 51 bars in Tel Aviv to talk with the patrons.

In Tel Aviv, Quantum Physics Is Taught Over a Glass of Beer
Scientists raid Tel Aviv bars for one night a year, as part of the successful tradition called ‘Science on Tap.’

Weizmann Institute Of Science

HERE’S SOMETHING INTERESTING FROM WEIZMANN INSTITUTE!…

Plants rule
Prof. Ron Milo’s lab reveals stunning insights about Earth’s biomass
While humans make up just a tiny fraction—only 0.01 percent—of the mass of all living things, we are responsible for a hefty amount of destruction across other species.
Human activity has caused a decline in the total biomass of wild mammals—both marine and terrestrial— by a factor of six, or over 80 percent, since the dawn of civilization, according to a new Weizmann Institute-led study. Meanwhile, the total plant biomass has declined twofold since the emergence of people on the planet due to the cutting down of forests.
But the surprising finding was that plants still rule the Earth—comprising about 450 gigatons of carbon (Gt C) out of the total 550 Gt C of biomass on Earth. In comparison, humans make up a staggeringly low 0.06 Gt C, despite our enormous impact.
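As a quick sanity check on those figures (using only the rounded values quoted above), the shares work out as follows:

```python
# Rounded global biomass figures quoted above, in gigatons of carbon (Gt C)
total_biomass = 550.0
plants = 450.0
humans = 0.06

plant_share = plants / total_biomass * 100  # percentage of all biomass that is plants
human_share = humans / total_biomass * 100  # percentage that is humans

print(f"Plants: {plant_share:.0f}% of global biomass")  # ~82%
print(f"Humans: {human_share:.3f}% of global biomass")  # ~0.011%
```

So the 0.06 Gt C figure for humans does indeed come out at roughly 0.01 percent of the 550 Gt C total, consistent with the opening paragraph.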
The study, recently published in the Proceedings of the National Academy of Sciences, includes a census of the total biomass distributed among all kingdoms of life. Performed by PhD student Yinon Bar-On from Prof. Ron Milo’s laboratory in the Department of Plant and Environmental Sciences, in collaboration with Caltech Prof. Rob Phillips, the research provides a holistic view of the biosphere’s composition while characterizing patterns according to taxonomic categories, geography, and nutrition. To assemble the census, the scientists conducted extensive analyses based on hundreds of existing studies.
Another insight from the study was that while the biomass of wild animals has declined steeply, the total mass of mammals—including humans and livestock—increased fourfold.
“Over the relatively short span of human history, major innovations, such as the domestication of livestock, the adoption of an agricultural lifestyle, and the Industrial Revolution, have increased the human population dramatically and have had radical ecological effects,” the authors observe. “The impact of human civilization on global biomass has not been limited to mammals but has also profoundly reshaped the total quantity of carbon sequestered by plants.”

Yeah anyway… Tel Aviv!

Tel Aviv Among World’s ‘Heavyweight’ Tech Hubs, Says New Report
While Silicon Valley is still “off the charts” as a global tech hub, Beijing and Shanghai are catching up and Tel Aviv is among the most influential and most international, with more deals involving foreign investors, according to a new report this week by New York-based research firm CB Insights.

Silicon Wadi
Silicon Wadi[1] (Hebrew: סיליקון ואדי‎, lit: “Silicon Valley”) is an area with a high concentration of high-technology companies on the coastal plain of Israel, similar to Silicon Valley in the U.S. state of California, and is the reason Israel is nicknamed the Start-Up Nation.[2][3] The area covers much of the country, although especially high concentrations of high-tech industry can be found in the area around Tel Aviv, including small clusters around the cities of Ra’anana, Petah Tikva, Herzliya, Netanya, the academic city of Rehovot and its neighbour Rishon Le Zion. In addition, high-tech clusters can be found in Haifa and Caesarea. More recent high-tech establishments have been raised in Jerusalem, and in towns such as Yokneam Illit and Israel’s first “private city,” Airport City, near Tel Aviv.

Tel Aviv Startup City
Ranked one of the world’s leading innovative cities, Tel Aviv is at the heart of the global startup scene. Through its vast resources, top talent, highest level of venture capital per capita, and non-stop culture, Tel Aviv is the place to be to create the next big project. Tel Aviv welcomes all ideas and startups no matter the size and will support you in your journey. The city, with its fast-paced nightlife and unforgettable environment, breeds the best innovation, uniqueness, and creativity.

The Rise Of Tel Aviv’s Tech Hub
Tel Aviv has long been the epicentre of Israel’s bustling high tech scene. The latest trend overtaking the startup world is shared workplaces dedicated for techies and young professionals alike to work in, and Tel Aviv’s famously (un)corporate work culture is leading the way. Built around open spaces and geared towards networking, these hubs range from so-called “accelerators” run by investors, to small communal offices for freelancers and creatives. In Tel Aviv, you can find a wide variety of both, with big firms like Microsoft setting up shop alongside hip young workspaces, perfect for the gig economy.

Tel Aviv Tech Hub May Be Small, But It Leads with Large Exits, Report Says
A new report by research firm CB Insights says that among the world’s six heavyweight tech hubs, Tel Aviv has the lowest number of deals but the highest quality.

10 disruptive Israeli companies that can wean the world off fossil fuels
Solar, water, geothermal and wind power, battery techs and electric-car components are areas where Israelis are leading the renewable revolution.

Energetics Technology Ltd.

The US wants to develop lasers with Israel – and it is allocating $25 million to it

artificial intelligence, Child Pornography, Online Child Abuse, Uncategorized

‘epidemic’ of online child sexual abuse… ooh a competition ‘Great’ Britain! (Lancaster University)

“Seriously, the British public openly support online child pornography! Blatantly and openly show all support for it… with two fingers to anyone who tries to combat it… the Brits are ranked third in the world for viewing online child porn… it is an epidemic… MILLIONS OF THEM! Our society has completely and utterly failed in every way!”
Artificial Intelligence? … because only in a society as warped and sick as Britain’s would we have to develop an artificial intelligence to stop us from looking at sexually explicit pictures of our own children!… fucking freaks!
(In Israel it is actually illegal NOT to report child abuse… just saying)

Police ‘under-resourced’ to tackle ‘epidemic’ of online child sexual abuse

A new report describes the police’s adoption of new technology as an “utter mess” and concludes forces “urgently need more money”.

Disrupting online abuse and exploitation: call for solutions

Organisations can apply for a share of up to £300,000 for contracts to disrupt and prevent the live online streaming of child sexual abuse.
The Home Office has up to £300,000 to invest in up to 5 innovative projects that are designed to disrupt live online child sexual abuse and exploitation.
Projects should investigate new methods and technologies. This may include detecting and disrupting live streaming or identifying and disrupting related financial transactions. It may also include other interventions such as psychological or behavioural approaches.
The competition is being run under the SBRI (Small Business Research Initiative).
Find out more about SBRI and how it works.

Identification, disruption and prevention
This competition is particularly seeking technical and psychological solutions for identification, disruption and prevention. It could include:

  • identification of live streams or associated chat logs, either in real time or from archived live-stream content
  • making use of any wider indicators of child sexual abuse and exploitation in order to identify and disrupt live-streamed content
  • supporting identification of potential victims or offenders by hosting providers
  • deterrence of potential offenders and preventing children becoming potential victims through behavioural insights or targeted communications

Competition information

  • the competition is open, and the deadline for applications is 14 November 2018
  • it is open to any organisation that can demonstrate a route to market for its idea
  • we expect total project costs to be up to £60,000 and for projects to last up to 3.5 months
  • successful projects will attract 100% funded development contracts
  • applications should be made through the Crown Commercial Service’s e-Sourcing Suite


Artificial intelligence toolkit spots new child sexual abuse media online (Lancaster University)

1 December 2016
New artificial intelligence software designed to spot new child sexual abuse media online could help police catch child abusers.

The toolkit, described in a paper published in Digital Investigation, automatically detects new child sexual abuse photos and videos in online peer-to-peer networks.

The research behind this technology was conducted in the international research project iCOP – Identifying and Catching Originators in P2P Networks – funded by the European Commission Safer Internet Programme, by researchers at Lancaster University, the German Research Center for Artificial Intelligence (DFKI), and University College Cork, Ireland.
There are hundreds of searches for child abuse images every second worldwide, resulting in hundreds of thousands of child sexual abuse images and videos being shared every year. The people who produce child sexual abuse media are often abusers themselves – the US National Center for Missing and Exploited Children found that 16 percent of the people who possess such media had directly and physically abused children.

Spotting newly produced media online can give law enforcement agencies the fresh evidence they need to find and prosecute offenders. But the sheer volume of activity on peer-to-peer networks makes manual detection virtually impossible. The new toolkit automatically identifies new or previously unknown child sexual abuse media using artificial intelligence.

“Identifying new child sexual abuse media is critical because it can indicate recent or ongoing child abuse,” explained Claudia Peersman, lead author of the study from Lancaster University’s School of Computing and Communications. “And because originators of such media can be hands-on abusers, their early detection and apprehension can safeguard their victims from further abuse.”

There are already a number of tools available to help law enforcement agents monitor peer-to-peer networks for child sexual abuse media, but they usually rely on identifying known media. As a result, these tools are unable to assess the thousands of results they retrieve and can’t spot new media that appear.

The iCOP toolkit uses artificial intelligence and machine learning to flag new and previously unknown child sexual abuse media. The new approach combines automatic filename and media analysis techniques in an intelligent filtering module. The software can identify new criminal media and distinguish it from other media being shared, such as adult pornography.

The researchers tested iCOP on real-life cases and law enforcement officers trialed the toolkit. It was highly accurate, with a false positive rate of only 7.9% for images and 4.3% for videos. It was also complementary to the systems and workflows they already use. And since the system can reveal who is sharing known child sexual abuse media, and show other files shared by those people, it will be highly relevant and useful to law enforcers.
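For context on those figures: a false positive rate is the share of innocuous items that a detector wrongly flags. The paper reports only the rates, not the underlying counts, so the numbers below are hypothetical, chosen purely to illustrate the calculation:

```python
def false_positive_rate(false_positives: int, true_negatives: int) -> float:
    """FPR = FP / (FP + TN): the share of benign items wrongly flagged."""
    return false_positives / (false_positives + true_negatives)

# Hypothetical counts chosen only to reproduce the quoted rates
print(f"{false_positive_rate(79, 921):.1%}")  # 7.9% (images)
print(f"{false_positive_rate(43, 957):.1%}")  # 4.3% (videos)
```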

“When I was just starting as a junior researcher interested in computational linguistics, I attended a presentation by an Interpol police officer who was arguing that the academic world should focus more on developing solutions to detect child abuse media online,” said Peersman. “Although he clearly acknowledged that there are other crimes that also deserve attention, at one point he said: ‘You know those sweet toddler hands with dimple-knuckles? I see them online… every day.’ From that moment I knew I wanted to do something to help stop this. With iCOP we hope we’re giving police the tools they need to catch child sexual abusers early based on what they’re sharing online.”

Warning follows report into online child sexual abuse risk (Lancaster University)

22 January 2018
If the public are serious about wanting to protect children from online sexual abuse more investment in skilled professionals is needed now.

The stark warning comes from researchers following publication of a new report commissioned by the Independent Inquiry on Child Sexual Abuse (IICSA) which coincides with the first day of the public hearing into online child sexual abuse.

Victims of online facilitated child sexual abuse often remain undetected until images or videos of their abuse are picked up by criminal investigations.

And warns Professor Corinne May-Chahal, of Lancaster University, who led this piece of research: “These involve a growing number of children but the resources needed to detect abuse, trace victims and help them get support is very limited.”

She added that a key question asked by parents and those working with children is what makes a child at risk of online sexual abuse.

The Lancaster report, published today (January 22), examines what is known about the characteristics, vulnerabilities and on-and-offline behaviour of victims of online-facilitated child sexual abuse and exploitation.

Whilst any child, even a younger child, could be at risk, what is less well known is that the risk of direct contact is just as likely to come from people the child knows as from strangers.

Online-facilitated Child Sexual Abuse (OFCSA) is a growing area of concern and includes a wide range of actions and events ranging from feeling upset by viewing sexual content online to live streaming sexual acts at the request of a perpetrator. It also includes the recording of offline CSA.

Sexting is not always abusive but, if images are shared without permission or distributed into peer to peer networks, it can be.

Researchers also found:
Most studies suggest girls are at higher risk but this may be because boys are less likely to admit to OFCSA.

Vulnerability characteristics included:
– Adverse childhood experiences, such as physical and sexual abuse and exposure to parental conflict, made children more vulnerable to online victimisation. However, any child from any socio-economic background can fall victim to OFCSA
– Disability, particularly disabled boys who may be at equal or greater risk to girls
– Above-average internet use increased vulnerability, especially when interacting with other characteristics such as having a disability or low self-esteem (at the time of the research, 12–15 year olds spent on average just over 2.5 hours per day online, and this is increasing all the time: according to Ofcom 2017 they now spend 21 hours a week online)
– Children exploring gender identities online may be more vulnerable
What protects children online:
– In managing unwanted experiences, many children develop important digital skills that contribute to their overall resilience
– The more upset or distressed a child is by online-facilitated CSA the more likely they are to tell others; usually friends or parents
– Children are unlikely to tell others if they are embarrassed or afraid

IICSA launched 13 investigations into a broad range of institutions. One of the investigations focuses on the institutional responses to child sexual abuse and exploitation facilitated by the Internet.

The Lancaster University research, a ‘rapid evidence assessment’, provides an overview of the current state of evidence on a selected topic and is one of several reports published today.

Rapid Evidence Assessment 
Characteristics and vulnerabilities of victims of online-facilitated child sexual abuse and exploitation (Lancaster University)

Pioneering New Work in Online Child Protection

Research by Professor Awais Rashid of Lancaster University
In collaboration with Dr James Walkerdine & Dr Phil Greenwood of Relative Insight, and Dr Paul Rayson & Dr Alistair Baron of Lancaster University.

Big Data linguistic research is pioneering a new field in online child protection called Digital Persona Analysis (DPA), automating the process of detecting sexual predators online who masquerade as children.

The research, accredited to the Global Uncertainties Programme (predecessor to PaCCS), was led by Professor Awais Rashid of Lancaster University and funded by the Engineering and Physical Sciences Research Council (EPSRC) and the Economic and Social Research Council (ESRC).

More children than ever are using social media and network sites online, increasing the number of children at risk from sexual predators. Online child protection is now a key concern.

DPA analyses vast quantities of data at a higher level, reporting individual personas and behaviours to investigators. It exploits the “Isis Toolkit” (part of a larger Isis project), that aimed to detect criminals who hid behind multiple identities (mainly adults posing as children). This research built on internationally-leading work on corpus comparison techniques that used statistical natural language analysis to look at the conversational behaviour of the British population in the 1990s. The majority of the research utilised a semantic analysis where keywords are characterised based on contextual information. This created the ability to operate in the face of noisy language data and deceptive behaviour; hence enabling the Isis Toolkit to detect masquerading tactics with a high degree of accuracy.
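The corpus-comparison idea mentioned above is commonly implemented as a log-likelihood “keyness” test between word frequencies in two corpora. The sketch below is a generic illustration of that standard technique, not the Isis Toolkit’s actual (non-public) code:

```python
import math
from collections import Counter

def log_likelihood(word: str, corpus_a: Counter, corpus_b: Counter) -> float:
    """Log-likelihood keyness of `word` between two corpora.

    High values mean the word's relative frequency differs strongly
    between the two corpora.
    """
    a, b = corpus_a[word], corpus_b[word]
    na, nb = sum(corpus_a.values()), sum(corpus_b.values())
    # Expected counts if the word were used at the same overall rate in both
    ea = na * (a + b) / (na + nb)
    eb = nb * (a + b) / (na + nb)
    ll = 0.0
    if a:
        ll += a * math.log(a / ea)
    if b:
        ll += b * math.log(b / eb)
    return 2 * ll

# Toy corpora: adult-style vs child-style chat (illustrative only)
adult = Counter("the meeting is scheduled for nine tomorrow".split())
child = Counter("lol that is so cool lol see you tomorrow".split())

# A word over-used in one corpus scores far higher than an evenly used one
print(log_likelihood("lol", adult, child) > log_likelihood("tomorrow", adult, child))  # True
```

Comparing a suspect persona’s word profile against reference corpora of known adult and child language is one way such a system can flag an adult masquerading as a child.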

‘The Isis Toolkit can detect, with an accuracy of 94%, when an adult is masquerading as a child compared to children participating in controlled experiments.’

IMPACTS

This highly interdisciplinary research (combining computer and behavioural science with linguistics) has led to collaboration between departments within the university and a wide range of other communities outside academia. This work has delivered impact in four different areas:

Commercialisation

The Isis Toolkit has been licensed to the spin-out company Relative Insight, where it has been successfully incorporated within broader security and child protection commercial offerings. The generalisation of the technology and the scaling up of the analysis capabilities via a cloud-enabled API now allow it to be applied to any type of investigative activity where digital text needs to be analysed. In addition to the security domain, the company has diversified further and uses the technology to help brands and advertising agencies extract insights from digital sources in order to better understand their target consumers. Owing to the success of the latter, Relative Insight has grown rapidly and now employs 13 people, with clients including Havas, Saatchi & Saatchi, Ogilvy, Twitter and Microsoft Mobile.

Law Enforcement

Successful live trials with UK police forces, along with the Child Exploitation and Online Protection (CEOP) Centre, have demonstrated the accuracy of the Isis Toolkit on real data sets while also decreasing the amount of analysis time required.

“[The Isis toolkit] provides the ability to focus analysis on specific information [and] allows investigations to be more focused and therefore potential victims of grooming or contact abuse to be identified more easily”
Quote from the evaluation of the aforementioned live trials.

The toolkit has also been licensed for use by the Royal Canadian Mounted Police, who see this research as an ‘operational necessity’. Large-scale agreements with other international customers could emerge in the future.

Education

Through developing the Toolkit, researchers have created and delivered internet safety lessons to over 500 students. This has generated strong links with teachers leading to the development of comprehensive lesson plans on e-safety topics for Key Stages 2-5 that have been rolled out across the region through the South Lakes Teaching School Alliance (SLTSA). In fact, the Isis Toolkit has been deployed worldwide through the release of the free iTunes app called ChildDefence, empowering children to protect themselves online. This was built upon in 2014 when it formed the basis of one of the WeProtect projects, an online child protection initiative set up by Prime Minister David Cameron.

Internet Governance

The research led to a policy paper prepared for BCS, the Chartered Institute for IT, and presented to Alun Michael MP in 2009. This paper was subsequently selected as the single UK contribution to the 2009 and 2010 Internet Governance Forums (in Sharm El-Sheikh and Vilnius respectively). It also provided written evidence to the Commons Select Committee on Education (2010), as well as contributing to the Proposal for a Directive of the European Parliament and of the Council on combating the sexual abuse, sexual exploitation of children and child pornography, repealing Framework Decision 2004/68/JHA (COM/2010/0094).

A report requested by the European Parliament’s Committee on Gender Equality used the Isis toolkit as a case study to support the use of a modified toolkit in assisting in the detection and management of cyber coercion and rape of women and girls.
In just raising awareness of online protection issues this research inevitably leads to an impact on the general public. It is crucially important in this area of online child protection to impact both public policy and to inform the general public.

AI toolkit can help police trace child sexual abuse online

London, Dec 3 (IANS) A new artificial intelligence (AI) toolkit ‘iCOP’ has been designed to identify child sexual abuse content online that can lead police to catch the abusers, researchers report.

According to the study published in the journal Digital Investigation, the ‘iCOP’ toolkit automatically identifies new or previously unknown child sexual abuse media using AI.

“With iCOP, we hope we are giving police the tools they need to catch child sexual abusers early based on what they are sharing online,” said lead researcher Claudia Peersman of Lancaster University.


Toddler hand inspired AI child sex abuse tool

The fight to rid the web of images of child abuse has gained a new tool – in the form of artificial intelligence.
The AI toolkit, inspired by photos of a toddler’s hand, can automatically detect new child sexual abuse photos and videos in online networks.
Spotting newly produced media can give law enforcement agencies the evidence they need to find and prosecute offenders, researchers said.
The system is freely available to law enforcement agencies.
It is already being used in several European countries.
The research was carried out as part of the international research project iCOP (identifying and catching originators in peer-to-peer networks), which was funded by the European Commission Safer Internet Programme.
It was carried out by researchers at Lancaster University, the German Research Centre for Artificial Intelligence and University College Cork in Ireland.
Lead researcher Claudia Peersman, from Lancaster University, explained what inspired her to develop the system.
“When I was just starting as a junior researcher interested in computational linguistics, I attended a presentation by an Interpol police officer who was arguing that the academic world should focus more on developing solutions to detect child abuse media online,” she said.
“Although he clearly acknowledged that there are other crimes that also deserve attention, at one point he said: ‘You know those sweet toddler hands with dimple-knuckles. I see them online every day’. From that moment I knew I wanted to do something to help stop this.”
Early detection
The toolkit works using a combination of techniques. The first is filename analysis – picking up typical disguised filenames used by paedophiles, such as “ch1ld”. These cannot be caught by standard text searches, and while they are easily spotted by humans, the sheer volume of images makes it impossible for law enforcers to check every file.
The software can also identify specialised vocabulary commonly used by paedophiles and associated with images, such as “Lolita”, a reference to Vladimir Nabokov’s novel about a middle-aged man who becomes obsessed with a young girl.
The second element of the toolkit is image analysis. The AI software can spot images of children via things such as subtle differences in skin colour compared to adults or by spotting movements associated with sexual abuse.
Hundreds of thousands of child sexual abuse images and videos are being shared every year. There are already a number of tools available to help law enforcement agents monitor peer-to-peer networks for child abuse media, but they usually rely on identifying known media.
“Identifying new child sexual abuse media is critical because it can indicate recent or ongoing child abuse,” said Ms Peersman.
“And because originators of such media can be hands-on abusers, their early detection and apprehension can safeguard victims from further abuse.”
Tests of the toolkits on real images of child sexual abuse appeared to be highly accurate, with a false positive rate of 7.9% for images and 4.3% for videos, according to the researchers.