Security in medical research collaboration focus of new research

Netanel Raviv, Eugene Vorobeychik seek to allow more people to get the most value out of research

Eric Butterman 
Netanel Raviv and Eugene Vorobeychik will be working on a security method that will allow researchers to get the most value out of medical research. (iStock photo)

Medical organizations are making breakthroughs left and right, but a major roadblock to even greater success is medical data that stays isolated. Collaborations are not happening, or not happening efficiently, because of fears about cybersecurity threats.

Netanel Raviv and Eugene Vorobeychik, both in the Department of Computer Science & Engineering in the McKelvey School of Engineering at Washington University in St. Louis, have been awarded a three-year, $400,000 grant from the National Science Foundation to contribute to a solution.

“Organizations are hesitant to allow external users to export machine-learned models trained on sensitive private datasets, but this limits the analysis that can be done,” says Netanel Raviv, assistant professor of computer science & engineering. “You have studies where there are 50 participants, but there could be related information out there on thousands more people, yet it isn’t made available to the people who need it. There is a huge difference in what assessments can be made with large groups of data instead of small ones, and this could be getting in the way of many important findings for the public.”

This is where sharing machine-learning (ML) models, as opposed to raw data, comes in: it allows the data analysis while offering greater safety, he says. Still, while ML models can be seen as safer than raw data, they can still be misused to leak information, for example by revealing whether a particular individual’s data was used to train a model, he adds.
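The kind of leak Raviv describes is known in the research literature as a membership inference attack: an attacker with access to an exported model guesses whether a given record was in its training set. The minimal Python sketch below illustrates the idea using a synthetic dataset as a stand-in for sensitive medical records; the model choice and confidence threshold are illustrative assumptions, not details of the funded project.

# A minimal sketch of a confidence-based membership inference attack.
# The dataset, model, and threshold here are hypothetical.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import train_test_split

# Synthetic stand-in for a sensitive medical dataset.
X, y = make_classification(n_samples=2000, n_features=20, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.5, random_state=0)

# The "shared" model: trained on private data, then exported.
model = RandomForestClassifier(random_state=0).fit(X_train, y_train)

# Attacker heuristic: models tend to be more confident on records
# they were trained on, so high confidence suggests membership.
conf_members = model.predict_proba(X_train).max(axis=1)
conf_nonmembers = model.predict_proba(X_test).max(axis=1)

threshold = 0.9  # attacker-chosen confidence cutoff
tpr = (conf_members > threshold).mean()     # members correctly flagged
fpr = (conf_nonmembers > threshold).mean()  # non-members wrongly flagged
print(f"attack true-positive rate: {tpr:.2f}, false-positive rate: {fpr:.2f}")

In a run like this, the model is typically far more confident on its own training records than on unseen ones, so the attack flags members at a much higher rate than non-members. That gap is precisely what privacy evaluation and auditing tools aim to measure and close.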

Raviv and his team, including co-principal investigator Vorobeychik, professor of computer science & engineering, plan a four-step approach to help prevent such leaks. First, they will evaluate models by devising attacks that expose their weaknesses. Next, when privacy violations are found, they will help developers fix them. Third, they will design auditing tools and privacy-patching methods. Finally, they will develop tools that let practitioners deploy the techniques from the first three goals.

Raviv says he hopes this work will allow more people to get the most value out of research.

“How much is lost by having this incredibly valuable data and feeling it has to be kept away from other brilliant researchers?” he says. “The goal is that we can have collaboration replace fear.”
