November 19, 2013
Federico Olmedo
The “verified security” methodology is an emerging approach to building high-assurance proofs of security properties of computer systems. Computer systems are modeled as probabilistic programs, and rigorous program semantics techniques are used to prove that they comply with a given security goal. In particular, the methodology advocates the use of interactive or automated theorem provers to build fully formal, machine-checked versions of these security proofs.
The verified security methodology has proved successful in modeling and reasoning about several standard security notions in the area of cryptography. However, it has fallen short of covering an important class of approximate, quantitative security notions. This class comprises prominent notions such as indifferentiability, which enables securely replacing an idealized component of a system with a concrete implementation, and differential privacy, a notion of privacy-preserving data mining that has received a great deal of attention in the last few years.
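For concreteness, the standard formulation of (ε, δ)-differential privacy (recalled here for reference; the abstract itself does not spell it out) requires that a randomized mechanism M satisfy, for all adjacent databases D and D' and all sets S of outputs,

\[ \Pr[\,M(D) \in S\,] \;\le\; e^{\varepsilon}\,\Pr[\,M(D') \in S\,] \;+\; \delta . \]

The multiplicative factor e^ε and the additive slack δ are precisely the kind of quantitative bounds that exact, qualitative program logics do not capture, which is why these notions fall outside the reach of the existing machinery.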
In this talk I will present the advances made in my PhD thesis toward bringing the above-mentioned class of security notions within the scope of verified security. In particular, I will present a quantitative version of probabilistic relational Hoare logic and demonstrate how it can be used to reason about such approximate, quantitative notions.
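As a rough sketch, and not necessarily the exact formulation used in the talk, such a quantitative relational Hoare logic manipulates judgments of the form

\[ \vdash c_1 \sim_{\alpha,\delta} c_2 : \Psi \Rightarrow \Phi , \]

read as: for any pair of initial memories related by the precondition Ψ, the output distributions of the probabilistic programs c_1 and c_2 are related by an (α, δ)-approximate lifting of the postcondition Φ. Instantiating c_1 and c_2 with the same program, taking α = e^ε, and choosing Φ to be equality of results then yields a statement of (ε, δ)-differential privacy.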