OpenDP - The core library of differential privacy algorithms powering the OpenDP Project.



Project Status: WIP – Initial development is in progress, but there has not yet been a stable, usable release suitable for the public. License: MIT

The OpenDP Library is a modular collection of statistical algorithms that adhere to the definition of differential privacy. It can be used to build applications of privacy-preserving computations, using a number of different models of privacy. OpenDP is implemented in Rust, with bindings for easy use from Python.

The architecture of the OpenDP Library is based on a conceptual framework for expressing privacy-aware computations. This framework is described in the paper A Programming Framework for OpenDP.

The OpenDP Library is part of the larger OpenDP Project, a community effort to build trustworthy, open source software tools for analysis of private data. (For simplicity in these docs, when we refer to “OpenDP,” we mean just the library, not the entire project.)


OpenDP is under development, and we expect to release new versions frequently, incorporating feedback and code contributions from the OpenDP Community. It's a work in progress, but it can already be used to build some applications and to prototype contributions that will expand its functionality. We welcome you to try it and look forward to feedback on the library! However, please be aware of the following limitations:

OpenDP, like all real-world software, has both known and unknown issues. If you intend to use OpenDP for a privacy-critical application, you should evaluate the impact of these issues on your use case.

More details can be found in the Limitations section of the User Guide.


The easiest way to install OpenDP is using pip (the package installer for Python):

$ pip install opendp

More information can be found in the Getting Started section of the User Guide.


The full documentation for OpenDP is located at . Here are some helpful entry points:

Getting Help

If you're having problems using OpenDP, or want to submit feedback, please reach out! Here are some ways to contact us:


OpenDP is a community effort, and we welcome your contributions to its development! If you'd like to participate, please see the Contributing section of the Developer Guide.

  • Floating-point issue in noise samplers


    Dear OpenDP team,

    As suggested by @Shoeboxam, I took a look at the approach OpenDP uses to sample noise. My understanding is that it implements three main mitigations against floating-point issues:

    1. using MPFR to generate a hole-free noise distribution centered on zero,
    2. computing (noise + shift/scale)*scale instead of noise*scale + shift,
    3. and rounding in conservative directions at every step.

    None of these mitigations address the problem that when summing two floating-point numbers, the result has the precision of the least-precise summand, which creates distinguishing events. In particular, this problem occurs regardless of whether noise is added before or after scaling, so the second mitigation does not work.
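    This precision mismatch is visible in plain floating-point arithmetic, independent of any DP machinery; a minimal Rust illustration:

```rust
// Adding a fine-grained value to 1.0 rounds away low-order bits that are
// fully retained when the same value is added to 0.0.
fn main() {
    let noise = 2f64.powi(-60); // finer-grained than ulp(1.0) = 2^-52
    let around_one = 1.0 + noise;
    let around_zero = 0.0 + noise;
    assert_eq!(around_one, 1.0); // the low-order bits were rounded away
    assert!(around_zero != 0.0); // ...but survive intact near zero
    println!("ok");
}
```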

    One easy way to see this is to take scale=1, which makes Mitigation 2 a no-op. The following proof of concept adds Laplace noise of scale 1 to 0 and to 1, and checks the precision level of the output.

    from opendp.trans import *
    from opendp.meas import *
    from opendp.comb import *
    from opendp.mod import enable_features
    enable_features("contrib", "floating-point")  # opt in to the features this PoC exercises
    samples = 1000
    data = "1,0"
    parse = make_split_dataframe(separator=",", col_names=["A", "B"])
    noisy_sum = (
        make_cast(TIA=str, TOA=float)
        >> make_impute_constant(0.)
        >> make_clamp(bounds=(0., 1.))
        >> make_bounded_sum(bounds=(0., 1.))
        >> make_base_laplace(scale=1.0)
    )
    # Noisy sum, col A
    sum_a = parse >> make_select_column(key="A", TOA=str) >> noisy_sum
    # Noisy sum, col B
    sum_b = parse >> make_select_column(key="B", TOA=str) >> noisy_sum
    out_a = [sum_a(data) for _ in range(0, samples)]
    out_b = [sum_b(data) for _ in range(0, samples)]
    mul_a = sum([(o*(2.**53)).is_integer() for o in out_a])
    mul_b = sum([(o*(2.**53)).is_integer() for o in out_b])
    print(f"{mul_a}/{samples} outputs from 1 are multiples of 2^-53...")
    print(f"... but only {mul_b}/{samples} from 0 are.")

    This prints:

    1000/1000 outputs from 1 are multiples of 2^-53...
    ... but only 722/1000 from 0 are.

    One solution is to determine in advance, based on the parameters of the aggregation, which level of precision you need, and round all outputs to this precision. This is what Google's DP libraries do. Another solution is to use MPFR to generate a hole-free noise distribution centered on shift, with the right noise scale, avoiding the problematic floating-point operations entirely.
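    As a rough sketch of the first solution (illustrative only, not Google's or OpenDP's actual code; the granularity value is made up, and a real implementation would also round the division in a conservative direction):

```rust
// Snap every noisy output to a multiple of a pre-chosen granularity so that
// all outputs share the same precision, removing the distinguishing event.
fn round_to_granularity(x: f64, granularity: f64) -> f64 {
    (x / granularity).round() * granularity
}

fn main() {
    let granularity = 2f64.powi(-10); // chosen in advance from the aggregation's parameters
    let noisy = 0.123456789; // stand-in for a noisy sum
    let snapped = round_to_granularity(noisy, granularity);
    // every output is now an exact multiple of the granularity
    assert_eq!((snapped / granularity).fract(), 0.0);
    println!("{snapped}");
}
```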

    This issue only happens if you enable the floating-point feature, which the documentation warns about. So I'm assuming you do not consider this a vulnerability, and I am posting it on GitHub as a regular issue.

    Best regards,


    opened by TedTed 8
  • ticketing system for email to info@, security@, etc.


    As discussed this morning, I'm proposing that OpenDP use the IQSS/HMDC ticketing system at for tracking correspondence that shouldn't be made public. (Dataverse uses this system as well.) Because it runs software called Request Tracker, it's often referred to as "RT".

    Without a ticketing system, it's hard to know who has replied to an inquiry, what was said, if it's resolved or not, etc.

    The obvious email to convert to a ticketing system is [email protected]. (Currently, [email protected] forwards to a group under We would get in touch with the IQSS/HMDC RT team via [email protected] and ask them to set up a "queue" for OpenDP.

    While we're at it, I would suggest that now is a good time to set up the email address [email protected] so that security researchers and others can contact us privately about any security concerns. For now it's probably fine to put both "info" and "security" messages in the same RT queue, but if anyone thinks starting off with two queues is better, please say so.

    Finally, I'll mention that while I'm focused on [email protected] and a new address called [email protected], this spreadsheet contains many more email addresses for OpenDP that may or may not be in regular use: . For now, I'd consider other email addresses out of scope.

    If this plan sounds reasonable, I just need the go ahead to get the ball rolling. I'll probably need help with changing the forwarding for [email protected].

    CATEGORY: Documentation OpenDP Core Effort 2 - Medium :cookie: 
    opened by pdurbin 6
  • Add Retries For Pulling Pypi Repo


    Closes #537:

    • Creates retry command function that will retry any command, and log/report the final failure
    • ~~Captures stderr when running command~~
    • Small naming nit in the Python version formatting
    opened by michaeleliot 5
  • Simple Attack Notebook Port


    We have a notebook demonstrating a simple privacy attack here, but it's in the old library:

    It just has a differencing attack; it would be good to look into other attacks as well.

    CATEGORY: Documentation good first issue OpenDP Core sn-core-deprecate 
    opened by Shoeboxam 5
  • clean up notebook example link, add Binder link #285


    Instead of a raw link to GitHub, I added some surrounding text so it looks nicer.

    More importantly, I also linked to Binder so people can try the notebooks from their browser.

    For now, the Binder link doesn't have a branch specified so it'll default to "main" (the default) but as discussed in #285 once we cut a release, we can specify "stable" as the branch.

    opened by pdurbin 5
  • Put docs in main repo, working API reference


    Closes #148

    As part of this pull request, I went ahead and incorporated the Python docs @Shoeboxam added but rather than using his "build docs" script, I tweaked the Sphinx config file. Here's how it looks:

    Screen Shot 2021-07-07 at 3 57 37 PM

    A few things to note:

    • If you'd like to try this branch locally, make html should work fine but if you want to try make versions you'll have to go into source/ and change "main" to "148-docs" in the line that says smv_branch_whitelist = r'^main$'
    • Right now I have smv_branch_whitelist set to "main" but somehow I'd like the docs for "main" to appear at a URL for "latest". To achieve this in the docs repo, we used "latest" rather than "main" as the default branch but I'm hoping we can avoid this.
    • I'm including the GitHub Action we use in the docs repo but I changed it (and the redirect page) to "main" instead of "latest". I haven't yet set up the gh-pages branch, CNAME, etc. For now the CNAME (in the GitHub Action) is until we decide we're happy with this setup of having the docs in the same repo.
    • Since I'm not using Mike's "build docs" script, I think we'll have to manually create files like opendp.v1.meas.rst when we add a new module. These are basically small index files and if this becomes a problem, we can probably figure out some automation.
    opened by pdurbin 5
  • Make clippy less unhappy


    This is less a bug than a tidiness issue.

    Describe the bug Running clippy (the Rust linter) on the Rust code yields a bunch of minor warnings. None of the warnings represent real problems, but cleaning them up will make it easier to spot future problems earlier.

    To Reproduce

    % cd rust
    % cargo clippy


    Looking over what clippy complains about, the fixes are easy and uncontroversial. I am happy to create a branch and submit a PR for those, but there is also a set of warnings from code generated by impl_exact_int_cast_int_float! in

    macro_rules! impl_exact_int_cast_int_float {
        ($int:ty, $float:ty) => (impl ExactIntCast<$int> for $float {
            fn exact_int_cast(v_int: $int) -> Fallible<Self> {
                let v_float = v_int as $float;
                if <$float>::MIN_CONSECUTIVE > v_float || <$float>::MAX_CONSECUTIVE < v_float {

    Clippy wants (see manual_range_contains) us to use .contains() for checking whether a value is within a range.
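    For illustration, here is the shape this lint flags and the rewrite clippy suggests (the bounds are made up, not the MIN_CONSECUTIVE/MAX_CONSECUTIVE constants from the macro):

```rust
// The manual comparison shape that clippy::manual_range_contains flags:
fn out_of_range_manual(v: f64) -> bool {
    -1.0 > v || 1.0 < v
}

// Clippy's suggested rewrite using RangeInclusive::contains:
fn out_of_range_contains(v: f64) -> bool {
    !(-1.0..=1.0).contains(&v)
}

fn main() {
    // the two forms agree everywhere, including at the bounds
    for v in [-2.0, -1.0, 0.0, 1.0, 2.0] {
        assert_eq!(out_of_range_manual(v), out_of_range_contains(v));
    }
    println!("ok");
}
```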

    I suspect this is straightforward, under the assumption that I am not too mistaken about when types are checked and enforced with respect to macro expansion. Indeed, the intent of the macro suggests that I am not too mistaken. But I've never played with Rust macros before. (There was a time when I wrote M4 macros for configurations, but that feels like several lifetimes ago.)

    opened by jpgoldberg 4
  • conservative rounding directions in relations


    Closes #150 (main issue) This PR is an initial pass at making the library robust to overflow and rounding in arithmetic.

    There were two other issues that were also related to rounding: Closes #314 Closes #343

    This PR does not add relation relaxation terms for worst-case floating-point rounding in the function (#348).

    opened by Shoeboxam 4
  • Docs Update -- User Guide Sections by MS


    Closes #269.

    • fixed python doctests
    • adjusted makefile for testing python doctests manually with make doctest-python
    • added python doctests to the smoke-test CI
    • added documentation files:
      • core-structures.rst
      • supporting-elements.rst
      • combinators.rst
      • transformations.rst
      • measurements.rst
      • application-structure.rst
    opened by Shoeboxam 4
  • Troubleshooting docs


    Place to collect install issues to move into a or documentation at some point.

    error: AttributeError: module 'enum' has no attribute 'IntFlag' solution: pip3 uninstall -y enum34

    error: error[E0658]: const generics are unstable solution: rustup update

    CATEGORY: Documentation OpenDP Core Effort 1 - Small :coffee: 
    opened by Shoeboxam 4
  • Error Handling Strategy


    Strategy for error handling in OpenDP, especially across the FFI boundary:

    • [x] Develop general approach for signaling errors from library functions.
    • [x] Define a way to expose this to FFI in safe manner.
    • [x] Write some example code for how this should be applied in the library.
    OpenDP Core Effort 1 - Small :coffee: 
    opened by andrewvyrros 4
  • Invalid options object. Dev Server has been initialized using an options object that does not match the API schema


    Discussed in

    Originally posted by dotnetmaheshbabu, December 28, 2022:

    ValidationError: Invalid options object. Dev Server has been initialized using an options object that does not match the API schema.
    - options has an unknown property 'progress'. These properties are valid: object { allowedHosts?, bonjour?, client?, compress?, devMiddleware?, headers?, historyApiFallback?, host?, hot?, http2?, https?, ipc?, liveReload?, magicHtml?, onAfterSetupMiddleware?, onBeforeSetupMiddleware?, onListening?, open?, port?, proxy?, server?, setupExitSignals?, setupMiddlewares?, static?, watchFiles?, webSocketServer? }
    - options.static should be one of these: [non-empty string | object { directory?, staticOptions?, publicPath?, serveIndex?, watch? }, ...] | boolean | non-empty string | object { directory?, staticOptions?, publicPath?, serveIndex?, watch? }
      -> Allows to configure options for serving static files from directory (by default 'public' directory).
      -> Read more at
      Details:
      * options.static has an unknown property '0'. These properties are valid: object { directory?, staticOptions?, publicPath?, serveIndex?, watch? }
      * options.static has an unknown property '1'. These properties are valid: object { directory?, staticOptions?, publicPath?, serveIndex?, watch? }

    opened by dotnetmaheshbabu 0
  • interactive measurements


    Based on many discussions with @andrewvyrros, @SalilVadhan, Marco Gaboardi and Michael Hay, as well as Andy's Python prototypes. Expect further changes.

    Key files for the Rust portion are:

    struct QueryableBase

    In this prototype, queryables are modeled as:

    struct QueryableBase(
        Rc<RefCell<dyn FnMut(&Self, &dyn Any) -> Fallible<Box<dyn Any>>>>
    );

    That is, a queryable is a closure that captures mutable variables. The mutable variables are the state, and as you can see, the type of the state does not appear in the type signature. This example identifies the variables that become the state of a concurrent composition queryable.

    • Why FnMut? I prefer using FnMut over separate state and transition because it eliminates the need to unbox, downcast, unpack, repack, upcast and re-box the state. It is more ergonomic to reference mutable variables that have been closed over. FnMut also removes the representation of state from the signature completely.

    • Why Rc? The parent should survive as long as any of its descendants. The initial approach involved lifetimes, where a child may hold a reference to its parent. This meant that children are generic WRT a lifetime parameter that signifies that they may live no longer than the parent (not the ideal behavior). A second issue that surfaced with lifetimes is that Any must be 'static, so the Context query (which itself contained a reference/lifetime) must be static.

    • Why RefCell? Now that we've established why we need Rc, realize that DerefMut is not implemented for Rc, to eliminate the possibility of multiple mutable borrows. RefCell is a pattern for interior mutability that enforces at most one mutable borrow exists at a time. Because of this, expect a panic if the transition function attempts to query &Self. If we wanted to support computational concurrency, we could swap these types to an Arc<Mutex<dyn FnMut ...>> that would block instead. I chose RefCell over Cell because the FnMut is not Copy.
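    To make the pattern concrete, here is a minimal, hypothetical sketch of a closure-as-queryable behind Rc<RefCell<...>> (illustrative only; it omits the &Self, dyn Any, and Fallible machinery of the real QueryableBase):

```rust
use std::cell::RefCell;
use std::rc::Rc;

// A queryable as a closure over mutable state: the captured variable *is*
// the state, Rc lets multiple owners (e.g. parent and child) share it, and
// RefCell enforces at most one mutable borrow at a time.
fn make_counter() -> Rc<RefCell<dyn FnMut(u32) -> u32>> {
    let mut total = 0u32; // this captured variable becomes the state
    Rc::new(RefCell::new(move |query: u32| {
        total += query;
        total
    }))
}

fn main() {
    let queryable = make_counter();
    let handle = Rc::clone(&queryable); // another owner of the same queryable
    assert_eq!((&mut *queryable.borrow_mut())(3), 3);
    assert_eq!((&mut *handle.borrow_mut())(4), 7); // shared underlying state
    // A reentrant query (borrowing while already borrowed) would panic,
    // as described above.
    println!("ok");
}
```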

    FnMut Arguments

    1. &Self: Interactive DP algorithms typically need a way to communicate with the queryable that spawns it (sequential composition, odometers, etc). To make this possible, &Self (an immutable reference to itself, a QueryableBase) is passed as the first argument to the FnMut. The parent queryable uses this to pass a Context query containing a reference to the parent queryable to a child queryable. Note that it is particularly useful that the base queryable representation is untyped, because Context doesn't leak the Q and A types of the parent.

    2. &dyn Any: represents an incoming query with a dynamic type. The body of the transition function attempts to downcast this to a small number of different concrete types (like Q, Context, CheckDescendantChange).

    FnMut Return

    Fallible<Box<dyn Any>>: represents an outgoing answer or an error. The answer is dynamically typed. In order to read the contents, one must downcast the answer to the expected type.

    Notice the QueryableBase has no type parameters. This is because the type of queries and answers may vary, even on simple queryables. In addition to the type a user may expect queries to be (like Measurement<DI, DO, MI, MO>), queryables also need to be able to handle internal book-keeping queries (like registering a parent Context, or a child asking for permission to execute).

    struct Queryable<Q, A>

    End-users will only interact with Queryable<Q, A>, which handles the downcasting internally. Queryable<Q, A> is a zero-sized wrapper around QueryableBase:

    pub struct Queryable<Q, A> {
        _query: PhantomData<Q>,
        _answer: PhantomData<A>,
        base: QueryableBase,
    }

    The eval function on Queryable<Q, A> upcasts inputs of type &Q to &dyn Any, and downcasts answers of type Box<dyn Any> to A. Interactive measurements are defined in terms of this typed wrapper of the queryable.
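    A minimal, hypothetical sketch of that upcast/downcast (simplified: it drops Fallible and the &Self argument, and all names are illustrative):

```rust
use std::any::Any;
use std::marker::PhantomData;

// A typed wrapper over an untyped transition function. `eval` upcasts the
// query to &dyn Any and downcasts the boxed answer back to A.
struct Queryable<Q, A> {
    base: Box<dyn FnMut(&dyn Any) -> Box<dyn Any>>,
    _marker: PhantomData<(Q, A)>,
}

impl<Q: 'static, A: 'static> Queryable<Q, A> {
    fn eval(&mut self, query: &Q) -> Option<A> {
        let answer = (self.base)(query as &dyn Any); // upcast &Q -> &dyn Any
        answer.downcast::<A>().ok().map(|a| *a)      // downcast Box<dyn Any> -> A
    }
}

fn main() {
    // a toy untyped transition function that doubles an i32 query
    let mut queryable: Queryable<i32, i32> = Queryable {
        base: Box::new(|query: &dyn Any| {
            let q = query.downcast_ref::<i32>().expect("unexpected query type");
            Box::new(q * 2) as Box<dyn Any>
        }),
        _marker: PhantomData,
    };
    assert_eq!(queryable.eval(&21), Some(42));
    println!("ok");
}
```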

    opened by Shoeboxam 0
  • Cleaning up documentation


    I'm working through the User Guide and finding errors. I figured I might as well clean them up.

    I do see that the .rst files have a testsetup section, but the reader will not be able to execute the code without first calling enable_features('contrib').

    Constructors (index.rst):

    The example code will not run as written; it leads to an error (and discussion). We need to enable features before using the clamp constructor:

    from opendp.mod import enable_features
    enable_features('contrib')

    Combinators (combinators.rst)

    Same problem as above

    Fixed via

    opened by alexWhitworth 0
  • Clean up default builds


    Doing a build with default features (without the --features untrusted,ffi that we usually enable) has some issues. It's not critical, but it'd be nice if this were cleaned up, so you could do a cargo build on a fresh clone.

    • cargo build:
    warning: /Users/av/Repositories/opendp/rust/Cargo.toml: version requirement `0.0.0+development` for dependency `opendp_derive` includes semver metadata which will be ignored, removing the metadata is recommended to avoid confusion
    warning: /Users/av/Repositories/opendp/rust/Cargo.toml: version requirement `0.0.0+development` for dependency `opendp_tooling` includes semver metadata which will be ignored, removing the metadata is recommended to avoid confusion
    warning: /Users/av/Repositories/opendp/rust/opendp_derive/Cargo.toml: version requirement `0.0.0+development` for dependency `opendp_tooling` includes semver metadata which will be ignored, removing the metadata is recommended to avoid confusion
       Compiling opendp v0.0.0+development (/Users/av/Repositories/opendp/rust)
    warning: unused macro definition: `enclose`
       --> src/
    148 | macro_rules! enclose {
        |              ^^^^^^^
        = note: `#[warn(unused_macros)]` on by default
    warning: unused import: `std::any::Any`
      --> src/domains/
    11 | use std::any::Any;
       |     ^^^^^^^^^^^^^
       = note: `#[warn(unused_imports)]` on by default
    • cargo test: Same as above, plus:
    error[E0425]: cannot find function `make_identity` in module `transformations`
      --> src/combinators/
    57 |         transformations::make_identity(AllDomain::<T>::new(), SymmetricDistance::default()).unwrap_test()
       |                          ^^^^^^^^^^^^^ not found in `transformations`
    warning: `opendp` (lib) generated 2 warnings
    warning: unused import: `crate::error`
      --> src/combinators/
    41 |     use crate::error::*;
       |         ^^^^^^^^^^^^
    OpenDP Core Effort 1 - Small :coffee: CATEGORY: Infrastructure 
    opened by andrewvyrros 0
  • Experimental partial application API


    I'm experimenting with a wrapper over the OpenDP Python API. The purpose is to give end-users the feeling that they are specifying privacy parameters directly, without us needing to maintain versions of the constructors that accept privacy parameters.

    It wraps each constructor in a function that returns a partial computation chain (a PartialChain) if an argument is omitted. I omit scale here to get a PartialChain:

    partial_base_laplace = make_base_laplace()

    If you __call__ a PartialChain, you get the Measurement or Transformation out: partial_base_laplace(1.) # gives a base_laplace Measurement

    Combinators also return a PartialChain if a PartialChain is passed in:

    partial_meas = (
        make_sized_bounded_mean(size, bounds) >> 

    You can call .fix(d_in, d_out) on a PartialChain to solve for the missing parameter:

    dp_mean_meas = partial_meas.fix(d_in=2, d_out=1.)
    print(dp_mean_meas.param)  # 0.009000000000046035
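    A hypothetical sketch of how .fix could solve for the missing parameter; the privacy map below is a toy pure-DP Laplace map (epsilon = d_in / scale), not the library's actual maps or search:

```rust
// Binary-search for the smallest free parameter (here, a noise scale) whose
// privacy map satisfies the target d_out at the given d_in.
fn solve_scale(d_in: f64, d_out: f64, privacy_map: impl Fn(f64, f64) -> f64) -> f64 {
    let (mut lo, mut hi) = (1e-8, 1e8);
    // invariant: the map fails at lo and passes at hi (map is monotone in scale)
    for _ in 0..100 {
        let mid = (lo + hi) / 2.0;
        if privacy_map(d_in, mid) <= d_out { hi = mid } else { lo = mid }
    }
    hi
}

fn main() {
    // toy map: epsilon = d_in / scale, so d_in=2, d_out=1 solves to scale=2
    let scale = solve_scale(2.0, 1.0, |d_in, scale| d_in / scale);
    assert!((scale - 2.0).abs() < 1e-6);
    println!("{scale}");
}
```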

    Located in the 612-partial-api branch.

    OpenDP Core CATEGORY: Project 
    opened by Shoeboxam 0
Open Differential Privacy