Biscuit is an authentication and authorization token for microservices architectures with the following properties:
distributed authentication: any node can validate the token with only public information;
offline delegation: a new, valid token can be created from another one by attenuating its rights, by its holder, without communicating with anyone;
capabilities based: authorization in microservices should be tied to rights related to the request, instead of relying on an identity that might not make sense to the verifier;
flexible rights management: the token uses a logic language to specify attenuation and add bounds on ambient data; it can model anything from small rules like expiration dates to more flexible architectures like hierarchical roles and user delegation (see the small example after this list);
small enough to fit anywhere (cookies, etc).
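As an illustration of the kind of rule the logic language can express, here is a sketch using the check syntax discussed later in this document; the fact names resource, operation, right and time are conventions assumed for the example, not requirements:
check if resource($res), operation("read"), right($res, "read");
check if time($time), $time <= 2021-12-31T23:59:59Z;
The first check restricts the token to read operations that were granted by the authority block, and the second one adds an expiration date.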
Non goals
This is not a new authentication protocol. Biscuit tokens can be used as opaque tokens delivered by other systems such as OAuth.
Revocation: Biscuit generates unique revocation identifiers for each token, and can provide expiration dates as well, but revocation requires external state management (revocation lists, databases, etc) that is outside of this specification.
Currently deploying to real-world use cases such as Apache Pulsar
looking for an audit of the token's design, cryptographic primitives and implementations
How to help us?
provide use cases that we can test the token on (specific kinds of caveats, auth delegation, etc)
cryptographic design audit: we need reviews of algorithms, their usage and implementation in various languages
Project organisation
SUMMARY.md: introduction to Biscuit from a user's perspective
SPECIFICATIONS.md is the description of Biscuit, its format and behaviour
DESIGN.md holds the initial ideas about what Biscuit should be
experimentations/ holds initial code examples for the cryptographic schemes and caveat language. code/biscuit-poc/ contains an experimental version of Biscuit, built to explore API issues
Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.
Biscuit has been in development for two years and is now used in production. Most of the initial roadmap is done (we still need to commission an audit).
So it will soon be time for a stable release and more public communication. Before that, I'd prefer that we clean things up a bit: there are design decisions that were left alone because fixing them would be breaking changes, and a 1.0 release is the right time to take care of them (here I consider a breaking change to be anything that would invalidate how currently existing tokens are validated).
This will be a meta issue for the work needed towards the major release:
vocabulary issues:
[x] #23 Consider renaming larger
[x] #24 Consider renaming In/NotIn
[x] #52 renaming atom (not a breaking change, this would mainly change the API in libraries)
[x] #54 text syntax changes (changes to the parser and printer)
[x] #63 renaming In to Contains, removing NotIn
[x] #55 Protobuf schema simplification
[x] #59 adding a version field to the Protobuf format
[x] #1 revocation identifiers
new data types. There might be more of them we could add. I'd also like to consider whether constraints could be folded into the expression type (by making them boolean expressions)
[x] #51 new atom type: set
[x] #38 new atom type: expression
[x] #61 new type: boolean
[x] #62 "deny" caveats vs "success caveats"
Datalog evaluation:
[ ] #50 specifying the regular expression features
[x] #47 allowing variables in constraints (need to decide if we allow it or not)
[x] #53 explicit run time limits for the datalog engine
[ ] #58 Using the symbol table for strings too
[ ] #56 Datalog restrictions in blocks
I'll make a branch of this repo with updated test samples once I've started that work on the Rust library.
Do you see anything else we would need?
cc @divarvel @daeMOn63 @titanous @Keruspe @KannarFr @BlackYoup @meh
Version 1.0 was released in March 2021 (245ab9e). Since then, we have gained more experience using it, and there are still some rough edges. I discussed it a lot with @divarvel and we feel it could be improved, but that would require breaking changes, hence a 2.0 version.
Proposals
New cryptographic scheme
We're currently using aggregated signatures (pairings and VRF designs were abandoned early) over Ristretto to ensure Biscuit's security properties. The scheme works fine, but it is complex to implement, even when copying the list of libsodium calls. Auditing it, for every implementation, will be a pain.
Proposal: we move to a new scheme, similar to the "challenge tokens" idea described in the design document, but simplified
It boils down to having each block sign the next block's public key, and shipping the last secret key with the token (so it can add another level). It's basically a chain of signatures like a PKI, it can be done with a series of ed25519 keys (easy to find good implementations), and it is easy to seal a token (sign the entire token with the last secret key, remove the secret key, send the token with the signature).
breaking change: the entire protobuf format changes
cf issue #73
Aggregation operations
As mentioned in #38: it would be useful to support aggregation operations in Datalog, like count, sum, min, max, average, and set operations like union and intersection.
Potential problem: in biscuit implementations, Datalog uses the naïve bottom-up approach, which could result in infinite execution through rules like this one: fact($i) <- fact($j), $i = $j + 1
This could be fixed by moving to a different implementation, like a top-down approach. We could also restrict the kinds of operations available, but I'm worried we would spend a lot of time finding all possible cases. Fixing it once in the engine might be better.
TODO: write issue
Symbol table
Manipulating the symbol table is messy, and its use in both the format and the Datalog engine does not help in understanding it. It was introduced both to reduce the token's size (because some names and strings might be repeated) and for faster Datalog execution (by interning strings, we can unify them by comparing integers instead of comparing entire strings).
Unfortunately, the separation between "symbols" and "strings" is not clear, and providing custom symbol tables is error prone.
Proposal: remove the symbol type (except for some special symbols like authority and ambient), store every string in the symbol table, and add more elements to the default symbol table
It would probably reduce the token size a bit, and simplify the Datalog implementation (and make it slightly faster)
breaking change: this changes the Datalog serialization, and removes the symbol type
TODO: write issue, test implementation
Revocation identifiers
We currently have two kinds of revocation identifiers, unique and not unique, because unique revocation identifiers were added afterwards. We should return to unique revocation identifiers only.
breaking change: older revocation identifiers will not be used anymore
Ternary and n-ary operations
Currently, expressions only support unary and binary operations; we will probably need to support operations with more arguments.
Scoped rules
Currently we have a concept of "privileged rules" that can generate facts with #authority and #ambient symbols. Those symbols are used in facts describing the basic rights of the token (from the root block) and the current values from the request (date, which resource is accessed, IP address, etc).
These symbols are confusing, and currently facts and rules in blocks other than block 0 can easily mess with each other (for example, block N+1 generating facts to pass checks from block N).
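For illustration, here is a hypothetical token evaluated in the current shared context (the fact and check names are made up for the example):
// block 1 (attenuation added by the token emitter)
check if deployment_env("staging");
// block 2 (added later by the token holder)
deployment_env("staging");
Because all blocks share one Datalog context, the fact added in block 2 satisfies the check from block 1, even though block 1 intended that fact to come from the verifier.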
Proposal: rules and checks should only be able to use facts from earlier and current blocks, not future ones, and generate facts scoped to the current block. The verifier generates facts at the scope of the first block but can check facts from all scopes
Potential problems:
this will be surprising compared to other Datalog implementations
breaking change: this changes the Datalog execution
With @clementd-fretlink, we've been looking at a Datalog-like language to express caveats.
Here are some ideas that emerged:
we want to avoid the rights reactivation that could appear if we implemented negation. On the other hand, a limited model like Datalog with constraints is interesting, because it can express limits on ambient data while keeping the logic simple
the authority field in the first block defines some facts
the ambient data defines some facts (IP address, time, http headers, etc)
the verifier provides some facts
the verifier provides some rules that can be used in caveats
caveats are queries that use rules and constraints on facts
we start from the initial set of facts (authority, ambient and verifier provided) at each caveat evaluation, to avoid new facts being kept between applications and possibly messing up validation
Current questions:
could one block define its own rules to make writing caveats a bit easier?
could one block define some facts and rules that could be reused in further tokens? (not sure I want to introduce ordering again here)
should facts have a scope, to prevent some rules generating them? (example: the ambient facts set should be locked, no new facts should appear)
To make it easier to reason about this language, I propose that we write some example facts, rules and queries in that issue.
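To make the examples below concrete, here is a plausible shape for the two caveats (written with the check syntax used elsewhere in this document, and assuming the authority block provides right(file1, read) and right(file2, read); the actual definitions may differ):
caveat1: check if resource($res), operation($op), right($res, $op)
caveat2: check if resource(file1)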
With resource(file1), operation(read) as ambient facts, caveat1 succeeds because resource(file1) & operation(read) & right(file1, read) is true, and caveat2 succeeds because the fact resource(file1) is present.
With resource(file1), operation(write), caveat1 fails but caveat2 succeeds.
With resource(file2), operation(read), caveat1 succeeds but caveat2 fails.
we do not specify a revocation system here, but support for a revocation id (that would allow revoking a token and all its derived tokens) could be useful
It will soon be time to define the serialization format for biscuit tokens. To sum up the current ideas:
using a binary format to make the token more compact
using symbol tables to avoid repeating symbols in the token
TAI64 for dates looks like a good idea
prefix varints to represent integers (the number of leading zeros indicates the number of bytes encoding the number) since most numbers will be very small
UTF8 for strings
add a version number to the format
the authority block should probably be separated from the other blocks
blocks carry an index that should be checked (because their order is important for the symbol table)
we might need two versions of the signature part: one that works with asymmetric crypto, one that works with symmetric crypto
beware of formats that have no guarantee on field order in serialization, since we hash those fields. A workaround for this issue when appending new blocks: start from an already serialized token instead of reserializing everything
it might be better to have a format that can support nested messages. We have a wrapping structure that contains the blocks and signature, and if that one checks out, we deserialize the blocks
Do you think it would make sense to support a version/instantiation of the tokens that exposes the same API & caveat language, but uses a Macaroon-like, symmetric-crypto-only construction?
Requirements
Use only symmetric primitives, based on a single secret specific to a security domain.
Provide authentication and encryption of the caveat content.
Support all features of the pubkey version, aside from public verifiability; that includes offline attenuation.
Rationale
Compared to pubkey biscuits, Macaroons provide very different tradeoffs between performance and security/applicability (all verifiers need access to the token-minting key). It could be quite useful to support a symmetric mode, for instance to support “caching” the validation (and expiration checking) of a pubkey-based credential by sending back to the user-agent a symmetric, time-limited version of it.
Having the same features and caveat language as the pubkey version supports this sort of translation between the two; in general, there should be as little difference as possible, from a user/developer's perspective, to limit cognitive overhead.
Lastly, there are three reasons to encrypt those tokens:
(slight) increase in privacy;
implementable without overhead compared to the Macaroon HMAC construction:
single-pass AE modes exist that aren't noticeably slower than a MAC;
preventing developers from parsing and “checking” caveats without verifying token authenticity when the minting key isn't available; amusingly & sadly, I've seen it happen, and IMO a misuse-resistant design should prevent that sort of silliness.
The following default symbol table is defined in the spec:
authority
ambient
resource
operation
right
time
revocation_id
Unfortunately, the rust implementation (used by python and node), and the other implementations copied from it (java, go, …) use a different table (current_time instead of time). Only the haskell implementation uses the same table as defined by the spec.
In addition to the implementations, time is used pervasively for TTL checks (in documentation, and in the TTL helpers provided by the CLI tool).
So we have two choices here:
align the spec (and the haskell impl) to what other implementations do, update the rust CLI tool, the docs, the samples
fix the rust, java, go implementations to align with the spec, considering that v2 biscuits have not been deployed in the wild yet
I think we have a small window where we can fix the implementations, before v2 tokens start to be deployed. That would also let us adapt the default symbol table to v2 use, something we forgot to do when working on v2 (the authority and ambient symbols are not used anymore, for instance). I'm not sure revocation_id is useful either, since we typically want to handle revocation outside Datalog.
Another thing that would be nice is to offset the indices of interned strings in tokens, to give future us more flexibility wrt default symbols.
Given the current status of v2 use, I think we have a window to ship a few improvements and avoid keeping cruft around, but if that's too late, it's perfectly ok to just align the spec with what's already done.
While the binary format and Datalog engines can handle fact names or strings with various kinds of characters, the text format should specify what it accepts. More generally, we need a grammar for the entire language, to make sure all implementations parse the same way.
For now, let's specify this:
fact and rule names begin with a-zA-Z, then the rest of the accepted characters are a-zA-Z0-9_:
variable names begin with a $, then the rest of the accepted characters are a-zA-Z0-9_
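A few examples consistent with these rules (illustrative only):
valid fact or rule names: operation, user_id, namespace:read
valid variable names: $time, $user_id, $0
invalid: 1fact (does not start with a letter), $user-id (contains a character outside a-zA-Z0-9_)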
This issue tries to describe a scheme that has the potential to improve things on two fronts:
more explicit semantics wrt block scoping (see #86)
a support for something similar to Macaroons' third-party caveats
First, a bit of context and a general rephrasing of those two concerns, then a description of the proposed scheme, and finally a summary of open questions
Context
Block scoping
There is an ongoing issue wrt how blocks interact together. In biscuits < v2, all blocks were put in the same context for Datalog evaluation. Facts coming from the authority block and the verifier were tagged with special #authority and #ambient symbols, which allowed separating trusted facts from facts provided by other blocks. This was quite error prone (both for users and implementers, as evidenced by a couple of security issues in the reference implementation), and it only protected the authority block from outside tampering. Non-authority blocks could be poisoned by following blocks.
One of the big improvements of biscuits v2 was to evaluate blocks in order: each block is evaluated completely (including checks) before moving on to the next block (and the verifier is evaluated along with the authority block). This makes sure that a block can't be tampered with by a following block. The benefits are twofold: users are not required to manually handle #authority and #ambient symbols, and all blocks are protected, not just the authority block.
This is not completely foolproof though (as detailed in #86): for instance, a block adding a time(2099-01-01T00:00:00Z); fact will effectively nullify all following TTL checks. So a block can't extend an existing biscuit, but it can prevent further attenuation in a not-so-obvious way.
One solution would be to run each block into its own scope to avoid this kind of issue, but this may be overly restrictive in some cases.
Third-party caveats
Third-party caveats are one of the biggest innovations brought by macaroons. Simply put, they allow putting a restriction in a token that will be fulfilled by a third-party service (whereas first-party caveats are directly discharged by the verifying party).
For each third-party caveat, the third-party can provide a proof that it holds through a discharge macaroon (which can itself embed more restrictions). The verifying party can trust this discharge macaroon through a secret shared with the third party, like for regular macaroons.
Having a comparable feature in biscuits could be rather useful, as it would allow distributed verification of biscuits as well.
As powerful as macaroons' third-party caveats are, they are still quite complex to operate (especially since they require sending multiple macaroons along with a given request, and they also require shared secrets between all parties concerned).
A nice improvement on third-party caveats would:
retain their initial properties (delegating some checks to a third party, with the possibility that the proof is made conditional on extra checks, possibly also delegated to another third party)
use public key cryptography to avoid sharing secrets (one of the main improvements of biscuits over macaroons)
allow packing the proof along with the original token for convenience
(ideally) be doable with no breaking change to the biscuit wire format and API
Rephrasing
So here, I think what we really want is an explicit way to tell which blocks we can trust (instead of just all the previous blocks). And if you squint just right, a discharge macaroon could be encoded as a block (facts provide the proof, while checks provide additional checks).
Currently, we can only trust the authority block because it's the only one where the signing key is chosen and not shared.
So in the end what we need is to:
be able to trust blocks other than authority thanks to an extra signature with a chosen key
be able to declare which blocks we can trust from datalog, possibly taking advantage of public key crypto
What I'm proposing
All that's listed below has been implemented in https://github.com/biscuit-auth/biscuit-haskell/pull/30/files#
Wire format changes
add an optional (signature, public key) pair to non-authority blocks
add an optional scoping directive to blocks, that could be a combination of authority only | previous blocks (current default) | blocks signed with the following public key | all blocks (unsafe).
add an optional scoping directive to rules (and as such, checks and policies) that would override the optional block-level directive
(optionally) add a key table if we want to do public key interning the same way we do string interning, to save space (this might make signatures more complex than needed, especially if we don't want to send the whole token to the third party)
All new fields would be optional, and missing values correspond to the current behaviour: the open PR indeed passes the conformance test suite.
Keeping the current stateful execution model (interleaving checks & policies with Datalog evaluation) is not possible because of dependencies (there can be cycles between block dependencies). What I'm proposing instead is to evaluate Datalog in a single pass, while keeping track of where a fact comes from. Each query (in rules, checks and policies) can then filter out facts it's not allowed to match on. This model supersedes the current one (with either an "only see authority" or an "only see previous blocks" default).
Facts origins
we can model a fact's origin as a (non-empty) set of block ids.
when a fact is declared in block n, its origin is {n}
when a fact is created through a rule declared in block n that matched facts f1..fx, its origin is {n} ∪ origin(f1) ∪ … ∪ origin(fx)
when the same fact is declared in multiple blocks, we can't directly merge them, so we'd need to either keep facts grouped by origin, or model the origin as a set of sets
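A hypothetical example of how origins would be computed (block ids, fact and rule names are made up):
// block 0 (authority) declares: user(1234)        -> origin {0}
// block 1 declares:             team(1234, "ops") -> origin {1}
// block 2 declares the rule:
ops_member($id) <- user($id), team($id, "ops");
// the derived fact ops_member(1234) gets origin {0, 1, 2},
// so a query that only trusts blocks 0 and 2 would not be allowed to match it.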
For completeness, I've kept the first version collapsed in a details tag.
What I'm proposing
Wire format changes
add an optional (signature, public key) pair to non-authority blocks
add an optional scoping directive to blocks, that could be authority only | previous blocks (current default) | blocks signed with the following public key.
(optionally) add a key table if we want to do public key interning the same way we do string interning, to save space (this might make signatures more complex than needed, especially if we don't want to send the whole token to the third party)
Both fields would be optional, and missing values correspond to the current behaviour.
Datalog syntax change
Add a datalog directive that declares the block scope.
Datalog evaluation change
This change would require a pre-processing step where each block is tagged with its optional verifying key, so that the datalog context can be populated accordingly before running rules and evaluating checks.
Open questions
Block scoping or rule / check scoping
While the scoping could be applied to individual rules / checks, I feel it would be quite verbose to express this on a per-element basis. It would also make the token size blow up unless we use key interning
Block selecting semantics
For individual blocks, the facts grouping semantics are clear, but for transitive dependencies, it's less so. Let's say that I have a block A that accepts facts from block B (because it's signed by a trusted third-party), but block B is later in the chain and is configured to see all facts from following blocks. This would make block A see facts from blocks located between A and B, even those not signed.
One solution would be to completely forbid transitive dependencies when depending on blocks located later in the chain, but that can be tricky to implement.
In biscuit v2.0, blocks are interpreted in order. This means that a block cannot pollute previous blocks with its facts, but can have its facts accessed by the following blocks.
With this system, it's not possible to make a token's capabilities grow, but it can be used to prevent further restriction of a token's capabilities, in a way that's not immediately obvious.
Consider this small example: starting from an open token, a block adds time(9999-01-01T00:00:00Z);.
This will not affect already existing TTL checks, but it will effectively prevent further TTL checks from working as expected. This is especially bad in contexts where checks are added automatically, since it would be possible to erroneously think that a newly added check works as expected.
Possible solutions
completely prevent facts and rules from appearing in blocks that are not the authority block (this is an existing option in the haskell lib, for instance)
execute each block in isolation so that facts defined in a block are not visible anywhere else
I'm not satisfied with the current situation since it prevents making any assumption about block checks without inspecting the whole token. Solution 1 is both underwhelming (it would be opt-in) and overwhelming (defining facts in a block would be useful for complex checks). Solution 2 sounds good to me but would prevent some use cases; then again, depending on a previous block's check seems dangerous in any case, since only the authority block can be trusted.
the fact scopes have to be transmitted if we want to replay an authorizer's behaviour. AuthorizerPolicies can be kept as a way to share policies to bootstrap the authorizer
Currently, datetime values only support comparison operations, which is useful for TTL checks.
Adding support for structured access to datetime components would be useful for other kinds of checks (eg. forbidding access during weekends and outside office hours).
In order to keep the API changes small, I think getting inspiration from the Postgres timestamp functions would be good, namely:
date_trunc for truncating a date to a specified precision (eg days, months, seconds), and extract for projecting a timestamp on specific fields (day in year, day in week, hours, minutes, …).
Going this way has the benefit of:
not growing the API surface much (that would be adding two binary operations)
not requiring the introduction of new types (eg TimeOfDay)
Here's how it could look:
check if time($time), [1,2,3,4,5].contains($time.extract("dow")),
$time.extract("hour") >= 9,
$time.extract("hour") <= 18;
Time zones and offsets
Currently, date time values are represented as an unsigned 64 bit timestamp. It doesn't carry any offset information. While the RFC 3339 input syntax allows specifying an offset (not a timezone), this information is not carried in the AST (or in the protobuf representation): the corresponding UTC timestamp is computed. As long as we're only comparing date time values, not caring about the offset is okay (comparing the corresponding UTC timestamps yields the correct answers).
Extracting date fields, however, requires caring about offsets: the local year, month, day, hour and minute depend on the offset.
Timezones
Time zones depend on an external database and tz info is machine-dependent, so time zones would not be handled
Solutions
There are multiple solutions for that:
tracking the time offset in the token representation (and in the AST): that's the most expressive solution, but it raises non-trivial questions regarding interop with older tokens, especially wrt the behaviour of date literals containing non-zero offsets.
introducing an .at_time_offset() method that shifts the timestamp by the corresponding offset, so that subsequent calls to .date_extract() work as expected, without modifying the AST. This strikes me as dangerous, as it would mean that $date.at_time_offset("+02:00") == $date would be false, even though changing the offset should not affect the instant itself
Introduce an .at_time_offset() method that registers an offset, along with the original timestamp, that can be used by a subsequent call to .date_extract(). This solves the problems of solution 2, at the cost of making the AST a bit more complex.
make .date_extract() take an extra parameter, allowing to specify the offset (the default would be Z); see the sketch below
Do nothing. Local time can be provided through authorizer facts, allowing the authorizer to use time zone data.
Solutions 1 and 2 are not acceptable to me, since they create ambiguity (even though solution 1 would be best as a greenfield design). I favour solution 4 over 3, as it does not require changing the AST, keeps the current "every timestamp is UTC" behaviour, and keeps offsets contained in the scope of date_extract (both visually and in the AST). A method taking two arguments would require a ternary operator and associated machinery, but that should not be too much work. Another solution would be to bundle both the date component and the offset in a single string, but that would be a dirty hack.
Solution 5 is also a strong contender, since dealing with offsets only and not actual timezones may make the whole feature set irrelevant.
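To make solution 4 concrete, here is a hypothetical example (the extra offset parameter does not exist yet) of an office-hours check evaluated at a +02:00 offset:
check if time($time),
  $time.date_extract("hour", "+02:00") >= 9,
  $time.date_extract("hour", "+02:00") < 18;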
There's a way to serialize the authorizer's content, but its usage is not clear. It can be employed both to preload an authorizer with existing data, and to dump an authorizer's content. With scopes coming in third-party blocks, we need to review this feature because it complicates the storage format
https://www.biscuitsec.org/docs/getting-started/introduction/ says "Biscuit is a bearer token" which might be taken to mean it has some of the shortcomings of bearer tokens: in particular that anyone that sees the token can reuse it.
It seems like that weakness can be quite significantly mitigated by attenuating the token so that it's valid for only a single request to whichever API, as you discuss in Per request attenuation.
The concrete suggestion is basically to mention this in the introduction.
One of the first sentences of your documentation says:
One of those building blocks is an authorization token that is signed with public key cryptography (like JWT), so that any service knowing the public key can verify the token.
This implies distributing public keys to all services that have to verify tokens, and managing key renewal, revocation, ...
Is it in your plans to include DID-based signatures or to encapsulate the biscuit in a verifiable token?
This could solve traditional PKI problems by using DPKI-based identity / signature management.
We already have union and intersection for sets; it would make sense to have a difference operation too.
And maybe add and remove too (they could be implemented using union and difference, but dedicated calls would make them easier to use).
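For example, assuming an allowed_operations fact carrying a set (the difference and remove methods are the proposed additions; contains, union and intersection already exist):
check if allowed_operations($ops), $ops.contains("read");
check if allowed_operations($ops), $ops.difference(["admin"]) == $ops;
check if allowed_operations($ops), $ops.remove("admin") == $ops;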