delegated, decentralized, capabilities-based authorization token

Overview

Biscuit authentication/authorization token

Join the chat at https://gitter.im/CleverCloud/biscuit

Goals

Biscuit is an authentication and authorization token for microservices architectures with the following properties:

  • distributed authentication: any node can validate the token with only public information;
  • offline delegation: a new, valid token can be created from another one by attenuating its rights, by its holder, without communicating with anyone;
  • capabilities based: authorization in microservices should be tied to rights related to the request, instead of relying on an identity that might not make sense to the verifier;
  • flexible rights management: the token uses a logic language to specify attenuation and add bounds on ambient data; it can express anything from small rules like expiration dates to more flexible architectures like hierarchical roles and user delegation (see the sketch after this list);
  • small enough to fit anywhere (cookies, etc).
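
As a rough illustration of that rights language, here is what a TTL attenuation could look like in Biscuit's Datalog-based syntax (a minimal sketch; the time fact is assumed to be supplied by the verifier at authorization time):

    // attenuation block added by the token holder: the token is only
    // valid if the verifier-provided time is not after the deadline
    check if time($time), $time <= 2025-12-31T23:59:59Z;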

Non goals

  • This is not a new authentication protocol. Biscuit tokens can be used as opaque tokens delivered by other systems such as OAuth.
  • Revocation: Biscuit generates unique revocation identifiers for each token, and can provide expiration dates as well, but revocation requires external state management (revocation lists, databases, etc) that is outside of this specification.

Roadmap

You can follow the next steps on the roadmap.

Current status:

  • the credential language, cryptographic primitives and serialization format are done
  • we have implementations in Rust, Java, Go and WebAssembly (based on the Rust version)
  • we are currently deploying to real-world use cases such as Apache Pulsar at Clever Cloud
  • we are looking for an audit of the token's design, cryptographic primitives and implementations

How to help us?

  • provide use cases that we can test the token on (specific kinds of caveats, auth delegation, etc)
  • cryptographic design audit: we need reviews of algorithms, their usage and implementation in various languages

Project organisation

  • SUMMARY.md: introduction to Biscuit from a user's perspective
  • SPECIFICATIONS.md is the description of Biscuit, its format and behaviour
  • DESIGN.md holds the initial ideas about what Biscuit should be
  • experimentations/ holds initial code examples for the cryptographic schemes and caveat language. code/biscuit-poc/ contains an experimental version of Biscuit, built to explore API issues

License

Licensed under Apache License, Version 2.0, (LICENSE-APACHE or http://www.apache.org/licenses/LICENSE-2.0)

logo by Mathias Adam

originally created at Clever Cloud

Contribution

Unless you explicitly state otherwise, any contribution intentionally submitted for inclusion in the work by you, as defined in the Apache-2.0 license, shall be licensed as above, without any additional terms or conditions.

Comments
  • 1.0 release


Biscuit has been in development for two years and is now used in production. Most of the initial roadmap is done (we still need to commission an audit).

So it will soon be time for a stable release and more public communication. Before that, I'd prefer that we clean things up a bit; there are design decisions that were left alone because fixing them would be breaking changes, and a 1.0 release is the right time to take care of them (here I consider a breaking change to be anything that would invalidate how currently existing tokens are validated).

    This will be a meta issue for the work needed towards the major release:

    • vocabulary issues:
      • [x] #23 Consider renaming larger
      • [x] #24 Consider renaming In/NotIn
      • [x] #52 renaming atom (not a breaking change, this would mainly change the API in libraries)
    • [x] #54 text syntax changes (changes to the parser and printer)
    • [x] #63 renaming In to Contains, removing NotIn
    • [x] #55 Protobuf schema simplification
    • [x] #59 adding a version field to the Protobuf format
    • [x] #1 revocation identifiers
    • new data types. There might be more of them we could add. I'd also like to consider whether constraints could be folded into the expression type (by making them boolean expressions)
      • [x] #51 new atom type: set
      • [x] #38 new atom type: expression
      • [x] #61 new type: boolean
      • [x] #62 "deny" caveats vs "success caveats"
    • Datalog evaluation:
      • [ ] #50 specifying the regular expression features
      • [x] #47 allowing variables in constraints (need to decide if we allow it or not)
      • [x] #53 explicit run time limits for the datalog engine
      • [ ] #58 Using the symbol table for strings too
      • [ ] #56 Datalog restrictions in blocks

    I'll make a branch of this repo with updated test samples once I've started that work on the Rust library.

    see anything else we would need? cc @divarvel @daeMOn63 @titanous @Keruspe @KannarFr @BlackYoup @meh

    opened by Geal 17
  • Biscuit 2.0


    Version 1.0 was released in March 2021 245ab9e. Since then, we got more experience using it, and there are still some rough edges. I discussed it a lot with @divarvel and we feel it could be improved, but that would require breaking changes, hence a 2.0 version.

    Proposals

    New cryptographic scheme

We're currently using aggregated signatures (pairing and VRF based designs were abandoned early) over Ristretto to ensure Biscuit's security properties. The scheme works fine, but it is complex to implement, even when copying the libsodium call list. Auditing it, for every implementation, will be a pain.

    Proposal: we move to a new scheme, similar to the "challenge tokens" idea described in the design document, but simplified

It boils down to having each block sign the next block's public key, and shipping the last secret key with the token (so the holder can add another level). It's basically a chain of signatures, like a PKI; it can be done with a series of ed25519 keys (good implementations are easy to find), and it is easy to seal a token (sign the entire token with the last secret key, remove the secret key, and send the token with the signature).

    breaking change: the entire protobuf format changes

    cf issue #73

    Aggregation operations

As mentioned in #38: it would be useful to support aggregation operations in Datalog, like count, sum, min, max, average, and set operations like union and intersection.

Potential problem: in biscuit implementations, Datalog uses the bottom-up, naïve approach, which could result in infinite execution through rules like this one: fact($i) <- fact($j), $i = $j + 1
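
Spelled out with a seed fact (a sketch, with comments added for illustration):

    fact(0)
    // each bottom-up pass matches the newest fact($j) and derives
    // fact($j + 1), so evaluation never reaches a fixpoint:
    // fact(1), fact(2), fact(3), …
    fact($i) <- fact($j), $i = $j + 1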

This could be fixed by moving to a different implementation, like the top-down approach. We could also restrict the kinds of operations available, but I'm worried we would spend a lot of time finding all the possible cases. Fixing it once in the engine might be better.

    TODO: write issue

    Symbol table

Manipulating the symbol table is messy, and its use in both the format and the datalog engine does not help in understanding it. It was introduced both to reduce the token's size (because some names and strings might be repeated) and to speed up datalog execution (by interning strings, we can unify them by comparing integers instead of entire strings). Unfortunately, the separation between "symbols" and "strings" is not clear, and providing custom symbol tables is error prone.

    Proposal: remove the symbol type, except for some special symbols like authority and ambient, have every string be stored in the symbol table, add more elements to the default symbol table

    It would probably reduce the token size a bit, and simplify the Datalog implementation (and make it slightly faster)

    breaking change: this changes the Datalog serialization, and removes the symbol type

TODO: write issue, test implementation

    Revocation identifiers

We currently have two kinds of revocation identifiers, unique and not unique, because unique revocation identifiers were added afterwards. We should return to unique revocation identifiers only.

    breaking change: older revocation identifiers will not be used anymore

    Ternary and n-ary operations

Currently, expressions only support unary and binary operations; we will probably need to support operations of larger arity.

    Scoped rules

Currently we have a concept of "privileged rules" that can generate facts with #authority and #ambient symbols. Those symbols are used in facts describing the basic rights of the token (from the root block) and the current variables from the request (date, which resource is accessed, IP address, etc). These symbols are confusing, and currently facts and rules in blocks other than block 0 can easily mess with each other (like block N+1 generating facts to pass checks from block N).

    Proposal: rules and checks should only be able to use facts from earlier and current blocks, not future ones, and generate facts scoped to the current block. The verifier generates facts at the scope of the first block but can check facts from all scopes

    Potential problems:

    • this will be surprising compared to other Datalog implementations

    breaking change: this changes the Datalog execution

    cf issue #75

    opened by Geal 16
  • Example logic queries


With @clementd-fretlink, we've been looking at a Datalog-like language to express caveats. Here are some ideas that emerged:

    • we want to avoid the rights reactivation that could appear if we implemented negation. On the other hand, a limited model like datalog with constraints is interesting, because it can express limits on ambient data while keeping a simple logic
    • the authority field in the first block defines some facts
    • the ambient data defines some facts (IP address, time, http headers, etc)
    • the verifier provides some facts
    • the verifier provides some rules that can be used in caveats
    • caveats are queries that use rules and constraints on facts
    • we start from the initial set of facts (authority, ambient and verifier provided) at each caveat evaluation, to avoid new facts being kept between applications and possibly messing up validation

    Current questions:

    • could one block define its own rules to make writing caveats a bit easier?
    • could one block define some facts and rules that could be reused in further tokens? (not sure I want to introduce ordering again here)
    • should facts have a scope, to prevent some rules generating them? (example: the ambient facts set should be locked, no new facts should appear)

    To make it easier to reason about this language, I propose that we write some example facts, rules and queries in that issue.

    First example:

    authority=[right(file1, read), right(file2, read), right(file1, write)]
    ----------
    caveat1 = resource(X) & operation(read) & right(X, read)  // restrict to read operations
    ----------
    caveat2 = resource(file1)  // restrict to file1 resource
    

    With resource(file1), operation(read) as ambient facts, caveat1 succeeds because resource(file1) & operation(read) & right(file1, read) is true, caveat2 succeeds because the fact resource(file1) succeeds.

With resource(file1), operation(write), caveat1 fails but caveat2 succeeds. With resource(file2), operation(read), caveat1 succeeds but caveat2 fails.
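
To extend the example with a verifier-provided rule (a hedged sketch in the same ad-hoc notation; the rules line and the can_access name are made up for illustration):

    rules = [can_access(X, Op) <- right(X, Op)]
    ----------
    caveat3 = resource(X) & operation(Op) & can_access(X, Op)

With resource(file1), operation(read) as ambient facts, can_access(file1, read) is derived from right(file1, read), so caveat3 succeeds; with resource(file2), operation(write), no right(file2, write) fact exists and caveat3 fails.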

    opened by Geal 13
  • adding a revocation id in the common keys?


    we do not specify a revocation system here, but support for a revocation id (that would allow revoking a token and all its derived tokens) could be useful

    opened by Geal 12
  • supporting expressions


    do we need operations like these:

    • integers: + - * / %
    • strings: concat
    • aggregates: count, sum, min, max, average

they would not be too hard to implement, but integrating them into the syntax might be complex
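
For instance (hypothetical syntax; the file_size and path facts are made up for illustration):

    caveat1 = file_size(S) & S % 512 = 0              // integer arithmetic
    caveat2 = path(P) & P = concat("/app", "/data")   // string concatenation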

    opened by Geal 10
  • Serialization format


    it will soon be time to define the serialization format for biscuit tokens. To sum up the current ideas:

    • using a binary format to make the token more compact
    • using symbol tables to avoid repeating symbols in the token
    • TAI64 for dates looks like a good idea
    • prefix varints to represent integers (the number of leading zeros indicates the number of bytes encoding the number) since most numbers will be very small
    • UTF8 for strings
    • add a version number to the format
    • the authority block should probably be separated from the other blocks
    • blocks carry an index that should be checked (because their order is important for the symbol table)
    • we might need two versions of the signature part: one that works with asymmetric crypto, one that works with symmetric crypto
    • beware formats that have no guarantee on field order in serialization, since we hash those fields. A workaround for this issue when appending new blocks: start from an already serialized token instead of reserializing everything
    • it might be better to have a format that can support nested messages. We have a wrapping structure that contains the blocks and signature, and if that one checks out, we deserialize the blocks
    opened by Geal 10
  • Add support for a Macaroon-like, symmetric construction


Do you think it would make sense to support a version/instantiation of the tokens that exposes the same API & caveat language, but uses a Macaroon-like, symmetric-crypto-only construction?

    Requirements

    • Use only symmetric primitives, based on a single secret specific to a security domain.
    • Provide authentication and encryption of the caveat content.
    • Support all features of the pubkey version, aside from public verifiability; that includes offline attenuation.

    Rationale

Compared to pubkey biscuits, Macaroons provide very different tradeoffs between performance and security/applicability (all verifiers need access to the token-minting key). It could be quite useful to support a symmetric mode, for instance to support "caching" the validation (and expiration checking) of a pubkey-based credential by sending back to the user-agent a symmetric, time-limited version of it.

    Having the same features and caveat language as the pubkey version supports this sort of translation between the two; in general, there should be as little difference as possible, from a user/developer's perspective, to limit cognitive overhead.

Lastly, there are three reasons to encrypt those tokens:

    • (slight) increase in privacy;
    • implementable without overhead, compared to the Macaroon HMAC construction: single-pass AE modes exist that aren't noticeably slower than a MAC;
    • preventing developers from parsing and "checking" caveats without verifying token authenticity when the minting key isn't available; amusingly & sadly, I've seen it happen, and IMO a misuse-resistant design should prevent that sort of silliness.
    opened by KellerFuchs 10
  • Update default symbol table as last edit to `v2`


    tl;dr:

    Set the default symbol table to:

    • read
    • write
    • resource
    • operation
    • right
    • time
    • role
    • owner
    • tenant
    • namespace
    • user
    • team
    • service
    • admin
    • email
    • group
    • member
    • ip_address
    • client
    • client_ip
    • domain
    • path
    • version
    • cluster
    • node
    • hostname
    • nonce

    Context

    The following default symbol table is defined in the spec:

    • authority
    • ambient
    • resource
    • operation
    • right
    • time
    • revocation_id

Unfortunately, the rust implementation (used by python and node) and the other implementations copied from it (java, go, …) use a different table (current_time instead of time). Only the haskell implementation uses the table defined by the spec. In addition to the implementations, time is used pervasively for TTL checks (in the documentation, and in the TTL helpers provided by the CLI tool).

    So we have two choices here:

    • align the spec (and the haskell impl) to what other implementations do, update the rust CLI tool, the docs, the samples
    • fix the rust, java, go implementations to align with the spec, considering that v2 biscuits have not been deployed in the wild yet

I think we have a small window where we can fix the implementations, before v2 tokens start to be deployed. That would also let us adapt the default symbol table to v2 use, something we forgot to do when working on v2 (the authority and ambient symbols are not used anymore, for instance). I'm not sure revocation_id is useful either, since we typically want to handle revocation outside datalog.

Another thing that would be nice is to offset the indices of interned strings in tokens, to give future us more flexibility wrt default symbols.

    Given the current status of v2 use, I think we have a window to ship a few improvements and avoid keeping cruft around, but if that's too late, it's perfectly ok to just align the spec with what's already done.

    Related PRs

    • https://github.com/biscuit-auth/biscuit-go/pull/86
    • https://github.com/biscuit-auth/biscuit-rust/pull/58
    • https://github.com/CleverCloud/biscuit-java/pull/38
    opened by clementd-fretlink 8
  • EBNF grammar for the text format


While the binary format and Datalog engines can handle fact names or strings with various kinds of characters, the text format should specify what it accepts. More generally, we need a grammar for the entire language, to make sure all implementations parse the same way.

For now, let's specify this:

    • fact and rule names begin with a-zA-Z, then the rest of the accepted characters are a-zA-Z0-9_:
    • variable names begin with a $, then the rest of the accepted characters are a-zA-Z0-9_
    • strings contain UTF-8 characters, without BOM
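
A corresponding grammar fragment could look like this (a sketch of the rules above, in the same notation as the grammar proposals elsewhere in this repo):

    <name>     ::= [a-zA-Z] [a-zA-Z0-9_:]*
    <variable> ::= "$" [a-zA-Z0-9_]*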
    opened by Geal 8
  • Block scoping and third-party caveats


    This issue tries to describe a scheme that has the potential to improve things on two fronts:

    • more explicit semantics wrt block scoping (see #86)
    • a support for something similar to Macaroons' third-party caveats

    First, a bit of context and a general rephrasing of those two concerns, then a description of the proposed scheme, and finally a summary of open questions

    Context

    Block scoping

There is an ongoing issue wrt how blocks interact. In biscuits < v2, all blocks were put in the same context for datalog evaluation. Facts coming from the authority block and the verifier were tagged with special #authority and #ambient symbols, which allowed separating trusted facts from facts provided by other blocks. This was quite error prone (both for users and implementers, as evidenced by a couple of security issues in the reference implementation), and it only protected the authority block from outside tampering. Non-authority blocks could be poisoned by following blocks.

    One of the big improvements of biscuits v2 was to evaluate blocks in order: each block is evaluated completely (including checks) before moving on to the next block (and the verifier is evaluated along with the authority block). This makes sure that a block can't be tampered with by a following block. The benefits are twofold: users are not required to manually handle #authority and #ambient symbols, and all blocks are protected, not just the authority block.

This is not completely foolproof though (as detailed in #86): for instance, a block adding a time(2099-01-01T00:00:00Z); fact will effectively nullify all following TTL checks. So a block can't extend an existing biscuit, but it can prevent further attenuation in a not-so-obvious way.

One solution would be to run each block in its own scope to avoid this kind of issue, but this may be overly restrictive in some cases.

    Third-party caveats

    Third-party caveats are one of the biggest innovations brought by macaroons. Simply put, they allow putting a restriction in a token that will be fulfilled by a third-party service (whereas first-party caveats are directly discharged by the verifying party).

For each third-party caveat, the third party can provide a proof that it holds through a discharge macaroon (which can itself embed more restrictions). The verifying party can trust this discharge macaroon through a secret shared with the third party, like for regular macaroons.

Having a comparable feature in biscuits could be rather useful, as it would allow distributed verification of biscuits as well. As powerful as macaroons' third-party caveats are, they are still quite complex to operate (especially since they require sending multiple macaroons along with a given request, and they also require shared secrets between all the parties concerned).

    A nice improvement on third-party caveats would:

    • retain their initial properties (delegating some checks to a third party, with the possibility that the proof is made conditional on extra checks, possibly also delegated to another third party)
    • use public key cryptography to avoid sharing secrets (one of the main improvements of biscuits over macaroons)
    • allow packing the proof along with the original token for convenience
    • (ideally) be doable with no breaking change to the biscuit wire format and API

    Rephrasing

    So here, I think what we really want is an explicit way to tell which blocks we can trust (instead of just all the previous blocks). And if you squint just right, a discharge macaroon could be encoded as a block (facts provide the proof, while checks provide additional checks).

    Currently, we can only trust the authority block because it's the only one where the signing key is chosen and not shared.

    So in the end what we need is to:

    • be able to trust blocks other than authority thanks to an extra signature with a chosen key
    • be able to declare which blocks we can trust from datalog, possibly taking advantage of public key crypto

    What I'm proposing

    All that's listed below has been implemented in https://github.com/biscuit-auth/biscuit-haskell/pull/30/files#

    Wire format changes

    • add an optional (signature, public key) pair to non-authority blocks
    • add an optional scoping directive to blocks, which could be a combination of: authority only | previous blocks (current default) | blocks signed with the following public key | all blocks (unsafe)
    • add an optional scoping directive to rules (and thus to checks and policies) that would override the optional block-level directive
    • (optionally) add a key table if we want to intern public keys the same way we intern strings, to save space (this might make signatures more complex than needed, especially if we don't want to send the whole token to the third party)

All new fields would be optional; missing values correspond to the current behaviour: the opened PR indeed passes the conformance test suite.

    Datalog syntax change

    // block-level scoping
    trusting previous;
    user(1234);
    // rule scoping, combining 
    is_allowed($user, $p) <- user($user), operation("read"), has_external_property($p) trusting ed25519/hex:{public key bytes}, authority;
    
    check if time($t), $t < {expiration} or
                  no_ttl("bypass") trusting authority;
    

    Here's the adapted grammar change

    <rule_body> ::= <rule_body_element> <sp>? ("," <sp>? <rule_body_element> <sp>?)* " " <origin_clause>?
    <origin_clause> ::= "trusting " ("any" | <origin_element> <sp>? ("," <sp>? <origin_element> <sp>?)*)
    <origin_element> ::= "authority" | "previous" | <signature_alg>  "/" <bytes>
    <signature_alg> ::= "ed25519"
    
    <block> ::= (<origin_clause> ";")? (<block_element> | <comment> )*
    

    Datalog evaluation change

Keeping the current stateful execution model (interleaving checks & policies with datalog evaluation) is not possible because of dependencies (there can be cycles between block dependencies). What I'm proposing instead is to evaluate datalog in a single pass, while keeping track of where each fact comes from. Each query (in rules, checks and policies) can then filter out the facts it's not allowed to match on. This model supersedes the current one (either with an "only see authority" or "only see previous blocks" default).

    Facts origins

we can model a fact's origin as a (non-empty) set of block ids.

    • when a fact is declared in block n, its origin is {n}
    • when a fact is derived through a rule declared in block n that matched facts f1…fx, its origin is {n} ∪ origin(f1) ∪ … ∪ origin(fx)
    • when the same fact is declared in multiple blocks, we can't directly merge them, so we'd need to either keep facts grouped by origin, or model the origin as a set of sets
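
A small worked example of origin tracking (block ids and facts are made up):

    // block 0 declares: user(1234);             origin(user(1234)) = {0}
    // block 1 declares: member($u) <- user($u);
    // the derived fact member(1234) has origin {1} ∪ origin(user(1234)) = {0, 1}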

    For completeness, I've kept the first version collapsed in a details tag.

    What I'm proposing

    Wire format changes

    • add an optional (signature, public key) pair to non-authority blocks
    • add an optional scoping directive to blocks, which could be: authority only | previous blocks (current default) | blocks signed with the following public key.
    • (optionally) add a key table if we want to intern public keys the same way we intern strings, to save space (this might make signatures more complex than needed, especially if we don't want to send the whole token to the third party)

    Both fields would be optional, missing values correspond to the current behaviour.

    Datalog syntax change

    Add a datalog directive that declares the block scope.

    Datalog evaluation change

    This change would require a pre-processing step where each block is tagged with its optional verifying key, so that the datalog context can be populated accordingly before running rules and evaluating checks.

    Open questions

    Block scoping or rule / check scoping

While the scoping could be applied to individual rules / checks, I feel it would be quite verbose to express on a per-element basis. It would also make the token size blow up unless we use key interning.

    Block selecting semantics

For individual blocks, the fact-grouping semantics are clear, but for transitive dependencies it's less so. Let's say block A accepts facts from block B (because it's signed by a trusted third party), but block B is later in the chain and is configured to see all facts from following blocks. This would make block A see facts from blocks located between A and B, even those not signed.

One solution would be to completely forbid transitive dependencies on blocks located later in the chain, but that can be tricky to implement.

    opened by divarvel 7
  • Isolated block execution


    In biscuit v2.0, blocks are interpreted in order. This means that a block cannot pollute previous blocks with its facts, but can have its facts accessed by the following blocks.

With this system, it's not possible to make a token's capabilities grow, but it can be used to prevent further restriction of a token's capabilities, in a way that's not immediately obvious.

Consider this small example: adding time(9999-01-01T00:00:00Z); to an open token. This will not affect already existing TTL checks, but will effectively prevent further TTL checks from working as expected. This is especially bad in contexts where checks are added automatically, since it would be possible to erroneously think that a newly added check works as expected.

    Possible solutions

    1. completely prevent facts and rules from appearing in blocks that are not the authority block (this is an existing option in the haskell lib, for instance)
    2. execute each block in isolation so that facts defined in a block are not visible anywhere else

I'm not satisfied with the current situation, since it prevents making any assumptions about block checks without inspecting the whole token. Solution 1 is both underwhelming (it would be opt-in) and overwhelming (defining facts in a block would be useful for complex checks). Solution 2 sounds good to me, even though it would prevent some use cases; then again, depending on a previous block's check seems dangerous in any case, since only the authority block can be trusted.

    opened by divarvel 7
  • add a new message format for authorizer snapshots


the fact scopes have to be transmitted if we want to replay an authorizer's behaviour. AuthorizerPolicies can be kept as a way to share policies to bootstrap the authorizer

    opened by Geal 2
  • Date & time manipulation


    Currently, datetime values only support comparison operations, which is useful for TTL checks.

    Adding support for structured access to datetime components would be useful for other kinds of checks (eg. forbidding access during weekends and outside office hours).

In order to keep the API changes small, I think taking inspiration from postgres timestamp functions would be good, namely: date_trunc for truncating a date at a specified precision (eg days, months, seconds), and extract for projecting a timestamp onto specific fields (day in year, day in week, hours, minutes, …).

    Going this way has the benefit of:

    • not growing the API surface much (that would be adding two binary operations)
    • not requiring to introduce new types (eg TimeOfDay)

Here's how it could look:

    check if time($time), [1,2,3,4,5].contains($time.extract("dow")),
                          $time.extract("hour") >= 9,
                          $time.extract("hour") <= 18;
    
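A date_trunc-style operation (mentioned above) could similarly be sketched as follows (hypothetical syntax, mirroring the postgres function):

    // only allow access on a given day, whatever the time of day
    check if time($time), $time.date_trunc("day") == 2023-06-12T00:00:00Z;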

    Time zones and offsets

Currently, date-time values are represented as an unsigned 64-bit timestamp, which doesn't carry any offset information. While the RFC 3339 input syntax allows specifying an offset (not a timezone), this information is not carried in the AST (or in the protobuf representation): the corresponding UTC timestamp is computed. As long as we're only comparing date-time values, not caring about the offset is okay (comparing the corresponding UTC timestamps yields the correct answers).

Extracting date fields, however, requires caring about offsets: the local year, month, day, hour and minute depend on the offset.

    Timezones

Time zones depend on an external database, and tz info is machine-dependent, so it would not be handled.

    Solutions

    There are multiple solutions for that:

    1. tracking the time offset in the token representation (and in the AST): that's the most expressive solution, but it raises non-trivial questions regarding interop with older tokens, especially wrt the behaviour of date literals containing non-zero offsets.
    2. introducing an .at_time_offset() method that shifts the timestamp by the corresponding offset, so that subsequent calls to .date_extract() work as expected, without modifying the AST. This strikes me as dangerous, as it would mean that $date.at_time_offset("+02:00") == $date would be false, even though changing the offset should not affect the instant itself
    3. Introduce an .at_time_offset() method that registers an offset along with the original timestamp, which can be used by a subsequent call to .date_extract(). This solves the problems of 2, at the cost of making the AST a bit more complex.
    4. make .date_extract() take an extra parameter, allowing the offset to be specified (the default would be Z)
    5. Do nothing. Local time can be provided through authorizer facts, allowing the authorizer to use time zone data.

Solutions 1 and 2 are not acceptable to me, since they create ambiguity (even though solution 1 would be best in a greenfield design). I favour solution 4 over 3, as it does not require changing the AST, keeps the current "every timestamp is UTC" behaviour, and keeps offsets contained in the scope of date_extract (both visually and in the AST). A method taking two arguments would require a ternary operator and the associated machinery, but that should not be too much work. Another option would be to bundle both the date component and the offset in a single string, but that would be a dirty hack. Solution 5 is also a strong contender, since dealing with offsets only, and not actual timezones, may make the whole feature set irrelevant.

    opened by divarvel 0
  • fix authorizer serialization


there's a way to serialize the authorizer's content, but its usage is not clear. It can be employed both to preload an authorizer with existing data, and to dump an authorizer's content. With scopes coming in third-party blocks, we need to review this feature because it complicates the storage format

    opened by Geal 0
  • Suggested clarification on "Biscuit is a bearer token"

    https://www.biscuitsec.org/docs/getting-started/introduction/ says "Biscuit is a bearer token" which might be taken to mean it has some of the shortcomings of bearer tokens: in particular that anyone that sees the token can reuse it.

    It seems like that weakness can be quite significantly mitigated by attenuating the token so that it's valid for only a single request to whichever API, as you discuss in Per request attenuation.

    The concrete suggestion is basically to mention this in the introduction.
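
For reference, such a per-request attenuation could be sketched as a check over a request-specific fact (the request_id fact name and value are made up; the verifier would supply the matching fact only for the request it actually received):

    // appended by the client just before sending the request
    check if request_id("4a8b21c0");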

    Two additional thoughts for what it's worth:

    1. Maybe the docs could compare it to AWS Sigv4:
    • Sigv4 seems to need a shared secret, whereas Biscuit works on an asymmetric key
    • Sigv4 signs the whole HTTP request whereas Biscuit defaults open and can be (and must be) constrained by the user?
    • Sigv4 embeds some AWS-specific concepts so can't directly be used as-is
    • Sigv4 has no concept of offline attenuation
    2. I wonder if there is or could be a pattern documented about how to make sure all request fields and the content actually are included as facts?
    opened by sourcefrog 2
  • DID / DPKI integration


    Hello Biscuit team,

One of the first sentences of your documentation says:

    One of those building blocks is an authorization token that is signed with public key cryptography (like JWT), so that any service knowing the public key can verify the token.

This implies distributing public keys to all the services that have to verify tokens, managing key renewal, revocation, ...

Is it in your plans to include DID-based signatures, or to encapsulate the biscuit in a verifiable token?

This could solve traditional PKI problems by using DPKI-based identity / signature management.

    opened by BastienVigneron 3
  • other set operations


we already have union and intersection for sets; it would make sense to have the difference operation too, and maybe add and remove as well (they can be implemented using union and difference, but dedicated calls would make them easier to use)
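
For illustration (the difference method is the proposed addition; union, intersection and contains already exist):

    // existing operations
    check if allowed($ops), $ops.intersection(["read", "write"]).contains("read");
    // proposed addition
    check if roles($r), $r.difference(["banned"]).contains("admin");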

    opened by Geal 0