IMDEA Software Researchers Publish Four Papers in Top-Ranked ACM Conference on Computer and Communications Security

September 14, 2016

Four papers by IMDEA Software Institute researchers have been accepted for publication at the 23rd ACM Conference on Computer and Communications Security (CCS 2016), to be held in Vienna, Austria, at the end of October:

Dario Fiore’s work, entitled “Hash First, Argue Later: Adaptive Verifiable Computations on Outsourced Data”, is co-authored by Cédric Fournet, Markulf Kohlweiss, Olga Ohrimenko and Bryan Parno from Microsoft Research, and Esha Ghosh from Brown University. This paper proposes new cryptographic schemes for verifying that third parties perform computations on outsourced data correctly.

The other three papers are co-authored by IMDEA Software Institute faculty member Gilles Barthe and former faculty member Pierre-Yves Strub. The first paper, “Advanced Probabilistic Couplings for Differential Privacy”, with Noémie Fong (ENS & IMDEA Software Institute), Marco Gaboardi (University at Buffalo, SUNY), Benjamin Grégoire (Inria), and Justin Hsu (University of Pennsylvania), provides new techniques to formally verify differentially private algorithms.

In the same vein, their second paper, “Differentially Private Bayesian Programming”, with Gian Pietro Farina and Marco Gaboardi (University at Buffalo, SUNY), Emilio Jesús Gallego Arias (CRI Mines – ParisTech), Andrew D. Gordon (Microsoft Research), and Justin Hsu (University of Pennsylvania), presents novel means for writing and verifying differentially private Bayesian machine learning algorithms.

Finally, their third paper, “Strong Non-Interference and Type-Directed Higher-Order Masking”, with Sonia Belaïd (Thales Communications & Security), Pierre-Alain Fouque (Université Rennes 1), Benjamin Grégoire (Inria), Rebecca Zucchini (Inria), and François Dupressoir (former IMDEA Software Institute member), presents a fully automated methodology to verify the probing security of masked algorithms against differential power analysis and to generate masked versions from unprotected descriptions of an algorithm.

More information is available on the CCS 2016 website.