Rust Kubernetes client and controller runtime

Overview

kube-rs

Rust client for Kubernetes in the style of a more generic client-go, a runtime abstraction inspired by controller-runtime, and a derive macro for CRDs inspired by kubebuilder.

These crates make certain assumptions about the kubernetes apimachinery + api concepts to enable generic abstractions. These abstractions allow rust reinterpretations of reflectors, informers, controllers, and custom resource interfaces, so that you can write applications easily.

Installation

Select a version of kube along with the generated k8s-openapi types corresponding to your cluster version:

[dependencies]
kube = { version = "0.64.0", features = ["runtime","derive"] }
k8s-openapi = { version = "0.13.1", default-features = false, features = ["v1_22"] }

Optional features are available; see the crate documentation for the full list.

We recommend turning off default-features for k8s-openapi to speed up your compilation.

Upgrading

Please check the CHANGELOG when upgrading. All crates herein are versioned and released together to guarantee compatibility before 1.0.

Usage

See the examples directory for how to use any of these crates.

For real world projects see ADOPTERS.
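The snippets below assume an existing Client. As a minimal sketch (tokio and anyhow are assumptions of this example, not requirements of kube), a client is typically constructed like this:

use kube::Client;

#[tokio::main]
async fn main() -> anyhow::Result<()> {
    // Infers the configuration from ~/.kube/config or the in-cluster environment
    let client = Client::try_default().await?;
    // ... hand `client` to the Api constructors shown below
    Ok(())
}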

Api

The Api is what interacts with kubernetes resources, and is generic over Resource:

use k8s_openapi::api::core::v1::Pod;
use kube::api::{Api, DeleteParams, Patch, PatchParams};
use serde_json::json;

let pods: Api<Pod> = Api::namespaced(client, "apps");

let p = pods.get("blog").await?;
println!("Got blog pod with containers: {:?}", p.spec.unwrap().containers);

let patch = json!({"spec": {
    "activeDeadlineSeconds": 5
}});
let pp = PatchParams::apply("my_controller");
let patched = pods.patch("blog", &pp, &Patch::Apply(patch)).await?;
assert_eq!(patched.spec.unwrap().active_deadline_seconds, Some(5));

pods.delete("blog", &DeleteParams::default()).await?;

See the examples ending in _api for more detail.

Custom Resource Definitions

Working with custom resources uses automatic code-generation via proc_macros in kube-derive.

You need to #[derive(CustomResource)] and some #[kube(attrs..)] on a spec struct:

use kube::CustomResource;
use schemars::JsonSchema;
use serde::{Deserialize, Serialize};

#[derive(CustomResource, Debug, Serialize, Deserialize, Default, Clone, JsonSchema)]
#[kube(group = "clux.dev", version = "v1", kind = "Foo", namespaced)]
pub struct FooSpec {
    name: String,
    info: String,
}

Then you can use the generated wrapper struct Foo as a kube::Resource:

let foos: Api<Foo> = Api::namespaced(client, "default");
let f = Foo::new("my-foo", FooSpec::default());
println!("foo: {:?}", f);
println!("crd: {:?}", serde_yaml::to_string(&Foo::crd()));

There are a ton of kubebuilder-like instructions that you can annotate with here. See the documentation or the crd_ prefixed examples for more.

NB: #[derive(CustomResource)] requires the derive feature enabled on kube.
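The generated CRD normally has to be installed in the cluster before Api<Foo> calls will succeed. A hedged sketch of doing that with server-side apply (the field manager name here is an illustrative assumption):

use k8s_openapi::apiextensions_apiserver::pkg::apis::apiextensions::v1::CustomResourceDefinition;
use kube::api::{Api, Patch, PatchParams};

// CRDs are cluster-scoped; the name is the derived plural + group
let crds: Api<CustomResourceDefinition> = Api::all(client.clone());
let params = PatchParams::apply("foo_installer");
crds.patch("foos.clux.dev", &params, &Patch::Apply(Foo::crd())).await?;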

Runtime

The runtime module exports the kube_runtime crate and contains higher level abstractions on top of the Api and Resource types so that you don't have to do all the watch/resourceVersion/storage book-keeping yourself.

Watchers

A low level streaming interface (similar to informers) that presents Applied, Deleted or Restarted events.

let api = Api::<Pod>::namespaced(client, "default");
let watcher = watcher(api, ListParams::default());

This gives a continual stream of events, so you do not need to worry about the watch restarting or connections dropping.

let mut apply_events = try_flatten_applied(watcher).boxed_local();
while let Some(event) = apply_events.try_next().await? {
    println!("Applied: {}", event.name());
}

NB: the plain stream items a watcher returns are different from WatchEvent. If you are following along to "see what changed", you should flatten it with one of the utilities like try_flatten_applied or try_flatten_touched.

Reflectors

A reflector is a watcher with a Store for K. It acts on all the Events exposed by the watcher to ensure that the state in the Store is as accurate as possible.

let nodes: Api<Node> = Api::namespaced(client, &namespace);
let lp = ListParams::default()
    .labels("beta.kubernetes.io/instance-type=m4.2xlarge");
let store = reflector::store::Writer::<Node>::default();
let reader = store.as_reader();
let rf = reflector(store, watcher(nodes, lp));

At this point you can listen to the reflector as if it was a watcher, but you can also query the reader at any point.
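For example, a sketch of driving the reflector stream in a task while querying the cached state elsewhere (Store::state is used here; the exact reader API has shifted slightly between releases):

use futures::{StreamExt, TryStreamExt};

// The reflector stream must be polled for the store to stay up to date
tokio::spawn(async move {
    let mut stream = rf.boxed();
    while let Some(event) = stream.try_next().await.unwrap() {
        // events can be handled here exactly like a plain watcher
        let _ = event;
    }
});

// Meanwhile, the reader can be queried at any point
println!("cached nodes: {}", reader.state().len());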

Controllers

A Controller is a reflector along with an arbitrary number of watchers that schedule events internally to send events through a reconciler:

Controller::new(root_kind_api, ListParams::default())
    .owns(child_kind_api, ListParams::default())
    .run(reconcile, error_policy, context)
    .for_each(|res| async move {
        match res {
            Ok(o) => info!("reconciled {:?}", o),
            Err(e) => warn!("reconcile failed: {}", Report::from(e)),
        }
    })
    .await;

Here reconcile and error_policy refer to functions you define. The first will be called when the root or child elements change, and the second when the reconciler returns an Err.
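As a sketch, those two functions typically look like the following (the context and error types here are illustrative assumptions, thiserror included; exact signatures have changed across releases, see e.g. the 0.73 and 0.75 notes further down):

use std::{sync::Arc, time::Duration};
use kube::runtime::controller::Action;

// Hypothetical user-defined context and error types
struct Data {}

#[derive(Debug, thiserror::Error)]
#[error("reconcile failed")]
struct Error;

async fn reconcile(obj: Arc<Foo>, _ctx: Arc<Data>) -> Result<Action, Error> {
    // inspect `obj`, create/patch owned children, update status, etc.
    Ok(Action::requeue(Duration::from_secs(300)))
}

fn error_policy(_obj: Arc<Foo>, _error: &Error, _ctx: Arc<Data>) -> Action {
    Action::requeue(Duration::from_secs(5))
}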

Rustls

Kube has basic support (with caveats) for rustls as a replacement for the openssl dependency. To use this, turn off default features, and enable rustls-tls:

[dependencies]
kube = { version = "0.64.0", default-features = false, features = ["client", "rustls-tls"] }
k8s-openapi = { version = "0.13.1", default-features = false, features = ["v1_22"] }

This will pull in rustls and hyper-rustls.

musl-libc

Kube will work with distroless, scratch, and alpine (it's also possible to use alpine as a builder with some caveats).

License

Apache 2.0 licensed. See LICENSE for details.

Comments
  • Refine our logo

    Edit: We played around with logos a bit one week here together, and we are using the final product from the comment below in https://github.com/kube-rs/kube-rs/issues/570#issuecomment-882128469. But we do need to soften that up a bit to make it proper (it doesn't divide cleanly atm).

    old experiment

    had some fun trying the obvious logo concept in my head (crossing the rust gear with the kubernetes wheel):

    (attached concept sketch: kube-rs-concept-blur)

    now; this is a sketch, and i'm a terrible graphics person: colors are pretty random, alignment is off, edges are awful, and as such have no usable thing.

    but opening this fun issue anyway. is this a good concept? anyone have better ideas? anyone want to try to make a better version / different idea? contributions welcome!

    opened by clux 40
  • donation of kube-rs to cncf

    Meta-issue via https://github.com/kubernetes/org/issues/2792#issuecomment-875660972. We are applying for CNCF Sandbox status (and not for donation into kubernetes or kubernetes-client org).

    old comparison of kubernetes org pros and cons - no longer relevant

    As mentioned therein, I am happy for kube-rs to outlive my historically fleeting interest for things. But, moving kube-rs to cncf under the kubernetes org comes with a bunch of pros and cons.

    PROS:

    CONS:

    • ~~need kubernetes bots and automation~~ #586 (not for CNCF)
    • ~~potential need to donate codegen~~ - #606 (not for CNCF)
    • ~~Needs a CLA~~ (only for kubernetes orgs)
    • being marked as official is arguably pointless when the rust community will treat us as the official client anyway
    • Your major/minor releases will likely be tied to k8s releases - likely plays in with #508
    • Being tied to a foundation might limit our ability to stay afloat with sponsorship (linux foundation might take cuts)

    ~~Will create separate issues for the CONS under a new donation label so we can discuss specifics in a less cluttered way. This issue can be used for general points, and what we think about this.~~

    ~~A donation would follow this kubernetes process.~~ No longer true. We are following the CNCF sandbox approach now.

    Progress:

    • [x] we can make an org regardless for this and associated repos that might be easier for rust (where main 3 are admins)
    • [x] add a DCO, and slowly work towards getting to a point where we can legally license #585
    • [x] apply for CNCF sandbox
    • [x] add cncf best practice docs and processes #670
    • [x] refine our logo #570 (need svg logo before last step)
    • [x] submit pr to cncf/landscape
    • [x] cncf landscape issue for kube-rs: https://github.com/cncf/toc/issues/754

    I am happy to discuss with CNCF and attempt to do most of the legwork here, if needed, and if this indeed feels worth doing to the community, and for us.

    donation 
    opened by clux 30
  • resolver 1 fallback does not work in edition 2021

    Trying to figure out why this is. Have tried setting resolver = "1" to get the old behaviour, but getting the k8s-openapi compile time failure when publishing.

    might open a bug upstream :thinking:

    invalid 
    opened by clux 28
  • Silent error getting kube config in-cluster

    Hi, I'm having a really basic problem with getting kube working in cluster with my k3s cluster (v1.18.9-k3s1). I have a very simple application which is basically just the following:

    #![feature(proc_macro_hygiene, decl_macro)]
    
    extern crate kube;
    extern crate serde;
    extern crate serde_json;
    
    use kube::{ Config };
    
    #[tokio::main]
    async fn main() -> Result<(), kube::Error> {
        println!("Starting...");
    
        println!("Getting kube config...");
        let config_result = Config::from_cluster_env(); //This is breaking
    
        println!("Resolving kube config result...");
        let config = match config_result {
            Ok(config) => config,
            Err(e) => {
                println!("Error getting config {:?}", e);
                return Err(e);
            }
        };
    
        println!("Finished!");
        Ok(())
    }
    
    [dependencies]
    kube = { version = "0.43.0", default-features = false, features = ["derive", "native-tls"] }
    kube-runtime = { version = "0.43.0", default-features = false, features = [ "native-tls" ] }
    k8s-openapi = { version = "0.9.0", default-features = false, features = ["v1_18"] }
    serde =  { version = "1.0", features = ["derive"] }
    serde_derive = "1.0"
    serde_json = "1.0"
    tokio = { version = "0.2.22", features = ["full"] }
    reqwest = { version = "0.10.8", default-features = false, features = ["json", "gzip", "stream", "native-tls"] }
    

    The output I'm getting is:

    [server] Starting...
    [server] Getting kube config...
    [K8s EVENT: Pod auth-controller-6fb8f87b4d-5stf5 (ns: ponglehub)] Back-off restarting failed container
    

    I'm hoping this is something obvious in my dependencies, but am suspicious that it's a K3S compatibility issue, since I tried using rustls originally and had to switch to native openssl because the k3s in-cluster api server address is an IP address rather than a hostname...

    opened by benjamin-wright 27
  • protobuf encoding exploration

    Is it reasonable/possible for us to get protobuf encoding with generated material? This is just a bit of a ramble on potential ideas. There are no concrete plans as of writing this. Any help on this is appreciated.

    This is for the last Gold Requirement from Client Capabilities; official documentation on kubernetes.io/api-concepts#protobuf

    Continuing/summarising the discussion from #127, we see conflicting uses of client-gold in other clients that do not support it, but let us assume good faith and try our best here.

    We see that the go api has protobuf codegen hints (api/types.go) like:

    // +optional
    // +patchMergeKey=name
    // +patchStrategy=merge
    EphemeralContainers []EphemeralContainer `json:"ephemeralContainers,omitempty" patchStrategy:"merge" patchMergeKey:"name" protobuf:"bytes,34,rep,name=ephemeralContainers"`
    

    whereas the (huge) swagger codegen has the equivalent json for that part of the PodSpec:

            "ephemeralContainers": {
              "description": "...",
              "items": {
                "$ref": "#/definitions/io.k8s.api.core.v1.EphemeralContainer"
              },
              "type": "array",
              "x-kubernetes-patch-merge-key": "name",
              "x-kubernetes-patch-strategy": "merge"
            },
    

    Here, the ordering 34 is missing so this is probably difficult to solve for k8s-openapi as it stands.

    However kubernetes/api does have generated.proto files (see core/v1/generated.proto) and it has the following to say about the entry:

      // +optional
      // +patchMergeKey=name
      // +patchStrategy=merge
      repeated EphemeralContainer ephemeralContainers = 34;
    

    We could maybe load those files with prost, but AFAICT that will create structs that conflict with the generated structs from k8s-openapi, and we rely on k8s-openapi for trait implementations. Unless there's a way to associate these structs with the k8s-openapi structs of the same name, this would be hard. Sounds like another codegen project if it is possible.

    On the other hand, if the swagger schemas had these tags, then k8s-openapi could optionally enable prost-tagging, but based on the existence of the kubernetes/api repo maybe they don't want to evolve the swagger schemas anymore? Maybe it's worth requesting upstream?

    client-gold 
    opened by clux 26
  • declarative openapi validation rules

    It would be really cool if we had a way to create declarative openapi validation rules ala:

    • https://book.kubebuilder.io/reference/generating-crd.html
    • https://kubernetes.io/docs/tasks/access-kubernetes-api/custom-resources/custom-resource-definitions/#validation

    e.g. something like:

    #[derive(OpenapiValidation, Deserialize, Serialize, Clone)]
    pub struct FooSpec {
        #[openapi(MinLength=5)]
        name: String,
        info: String,
        #[openapi(Minimum=1, Maximum=10)]
        replicas: i32,
    }
    

    The thing this most reminds me of is serde attributes, but this feels a whole lot dumber than that. We just need to generate { type: object, properties: ... } from a struct, so it's easy to attach onto the crd json!

    The problem with this is that it would necessarily conflict with serde field attributes, which are also frequently used. A dumb proc macro that wasn't aware of serde wouldn't be hard to do, but I imagine it would also cause a lot of duplication between serde attrs like skip_serializing_if / default and the validation rules.
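    For comparison: kube-derive's CustomResource derive generates schemas via schemars' JsonSchema, and schemars 0.8 offers validation attributes along these lines; a hedged sketch of that approach (attribute names assume schemars 0.8):

    use schemars::JsonSchema;
    use serde::{Deserialize, Serialize};

    #[derive(JsonSchema, Deserialize, Serialize, Clone, Debug)]
    pub struct FooSpec {
        #[schemars(length(min = 5))]
        name: String,
        info: String,
        #[schemars(range(min = 1, max = 10))]
        replicas: i32,
    }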

    derive 
    opened by clux 26
  • macOS Security Framework fails to import modern PKCS#12 created by OpenSSL 3

    Hi,

    I'm having the following error when running on a MacBook (M1) with a k3s cluster that was created by k3d:

     cargo run
        Finished dev [unoptimized + debuginfo] target(s) in 0.19s
         Running `target/debug/test_kube`
    Error: SslError: MAC verification failed during PKCS12 import (wrong password?)
    

    This doesn't happen if I use GKE or token based authentication.

    I have a repository that reproduces this on my machine: https://github.com/danni-m/PKCS12_issue. The kubeconfig file I'm using is:

    ---
    apiVersion: v1
    clusters:
    - cluster:
        certificate-authority-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkakNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdGMyVnkKZG1WeUxXTmhRREUyTXpVNU1qa3pNRGd3SGhjTk1qRXhNVEF6TURnME9ESTRXaGNOTXpFeE1UQXhNRGcwT0RJNApXakFqTVNFd0h3WURWUVFEREJock0zTXRjMlZ5ZG1WeUxXTmhRREUyTXpVNU1qa3pNRGd3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUblVCS1NsT2FpN1J6UTVEandKQktaZFNFYThSTWsvRXpONEJ4N1pnSkQKNjlFS2xRZXZERkl3Mm1rMVpSNzNFNytxR2VpRVlLLzhadW5Tb0tCNGtwdjdvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVUU0bjlOUzBUODdWaWx2d0hnZ2dSCjU1R1RhWXN3Q2dZSUtvWkl6ajBFQXdJRFJ3QXdSQUlnVlhBRUxBZW5IbVRhVU1GTjViaWZzaE9qYTN3VmRRVm4KTTRXZnhNT0VHWWtDSUdRRE1JT1Vtb2xtd2dEVmZabXo2bGt0Y2lxUHBtRHVxenNrZG9YZ0hiVXUKLS0tLS1FTkQgQ0VSVElGSUNBVEUtLS0tLQo=
        server: https://0.0.0.0:58362
      name: k3d-testing
    contexts:
    - context:
        cluster: k3d-testing
        user: admin@k3d-testing
      name: k3d-testing
    current-context: k3d-testing
    kind: Config
    preferences: {}
    users:
    - name: admin@k3d-testing
      user:
        client-certificate-data: LS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJrVENDQVRlZ0F3SUJBZ0lJT01iMWYvOHk4a293Q2dZSUtvWkl6ajBFQXdJd0l6RWhNQjhHQTFVRUF3d1kKYXpOekxXTnNhV1Z1ZEMxallVQXhOak0xT1RJNU16QTRNQjRYRFRJeE1URXdNekE0TkRneU9Gb1hEVEl5TVRFdwpNekE0TkRneU9Gb3dNREVYTUJVR0ExVUVDaE1PYzNsemRHVnRPbTFoYzNSbGNuTXhGVEFUQmdOVkJBTVRESE41CmMzUmxiVHBoWkcxcGJqQlpNQk1HQnlxR1NNNDlBZ0VHQ0NxR1NNNDlBd0VIQTBJQUJQbHMxY2ZYdGpQVWIxdUMKZFhCbHVydTBZU3pqR0pGWlEzYXRUTHFDT1FTNlFDQUNMcW5NY29scy82aGhBL2RXY3JJdmFFS1VpWHZIMUs5dApzQkw5OEZxalNEQkdNQTRHQTFVZER3RUIvd1FFQXdJRm9EQVRCZ05WSFNVRUREQUtCZ2dyQmdFRkJRY0RBakFmCkJnTlZIU01FR0RBV2dCU0VzRDdNZElVQnExNGJWdFoybjJ0S1pOMnY4REFLQmdncWhrak9QUVFEQWdOSUFEQkYKQWlBRzB2Yjk4dzV4ekVIL2tORTNBOGh1TmMwRG42N08yMS9WUmtzbFloSWN6Z0loQU9SZlVnSVpaWm40WU54egoydGpUMkhHdUlXS0QvVzVRdGp3Uk5pZjFWWEZjCi0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0KLS0tLS1CRUdJTiBDRVJUSUZJQ0FURS0tLS0tCk1JSUJkekNDQVIyZ0F3SUJBZ0lCQURBS0JnZ3Foa2pPUFFRREFqQWpNU0V3SHdZRFZRUUREQmhyTTNNdFkyeHAKWlc1MExXTmhRREUyTXpVNU1qa3pNRGd3SGhjTk1qRXhNVEF6TURnME9ESTRXaGNOTXpFeE1UQXhNRGcwT0RJNApXakFqTVNFd0h3WURWUVFEREJock0zTXRZMnhwWlc1MExXTmhRREUyTXpVNU1qa3pNRGd3V1RBVEJnY3Foa2pPClBRSUJCZ2dxaGtqT1BRTUJCd05DQUFUMWVDVlhyeTc4V3RSaS9iekQ4c1pZbkIwOUVTZDJCK0lHdDR6c0tTdXMKbWJERGZOWTdvSjhwWGJTeUpyNWkvdCs5VXBoTWljbGVYcHZFUjQycHhLaEZvMEl3UURBT0JnTlZIUThCQWY4RQpCQU1DQXFRd0R3WURWUjBUQVFIL0JBVXdBd0VCL3pBZEJnTlZIUTRFRmdRVWhMQSt6SFNGQWF0ZUcxYldkcDlyClNtVGRyL0F3Q2dZSUtvWkl6ajBFQXdJRFNBQXdSUUloQU94bmIrSTZYWEdFNFJ0RWpnelhacG9nOVZxMWJ0cGgKSklWcTRNbllseWJvQWlCNUQ3anFudjRnZGUrRFJFLzRtQ2Uxdk16d1JUSm9ZbnJTcUx3a2VNeGpodz09Ci0tLS0tRU5EIENFUlRJRklDQVRFLS0tLS0K
        client-key-data: LS0tLS1CRUdJTiBFQyBQUklWQVRFIEtFWS0tLS0tCk1IY0NBUUVFSUtiU3B6N0NXdFNLZ3FaUHhHWm9tZTZCa1Z6RGxEbkxCRjF4MzFMZEh5dDBvQW9HQ0NxR1NNNDkKQXdFSG9VUURRZ0FFK1d6Vng5ZTJNOVJ2VzRKMWNHVzZ1N1JoTE9NWWtWbERkcTFNdW9JNUJMcEFJQUl1cWN4eQppV3ovcUdFRDkxWnlzaTlvUXBTSmU4ZlVyMjJ3RXYzd1dnPT0KLS0tLS1FTkQgRUMgUFJJVkFURSBLRVktLS0tLQo=
    
    bug client blocked macos 
    opened by danni-m 21
  • chrono not maintained - CVE-2020-26235 in time

    chrono depends on a vulnerable version of time and it seems not to be well maintained. Could you maybe replace it completely with some newer version of time, as suggested here?

    invalid dependencies 
    opened by acim 21
  • Upgrade `k8s-openapi` to 0.16

    • [x] cargo upgrade k8s-openapi --workspace for 0.16.0
    • [x] remove references to api resources gone in 1.25
    • [x] bump our MK8SV and local pin of k8s-openapi to v1_20 using just bump-k8s
    • [x] fix k3d CI issue
    • [x] reproduce v1_25 issue #997 (it's worse than expected)
    • [x] report findings upstream https://github.com/kubernetes/kubernetes/issues/111985

    TL;DR: the upstream bug is bad and breaking, and we need to do something to support v1_25, but we cannot detect it, so it needs to be user opt-in. Have left the evar-based fix in a branch. See comments below.

    dependencies changelog-change 
    opened by clux 20
  • Mac: My cluster isn't trusted

    kube-rs: 0.17.1. I'm trying to interact with a GCP-hosted cluster. The cluster certificate is self-signed. When I start my application I see errors like this:

    Error: Error { inner: Error(Hyper(Error(Connect, Custom { kind: Other, error: Error { code: -67843, message: "The certificate was not trusted." } })), "https://<redacted cluster IP>/api/v1/persistentvolumes?")
    
    Error executing request }
    

    If I update my client OS and tell it to trust the certificate then the problem disappears, so I guess the problem is related to the library not realising that it needs to process the cluster certificate somehow. I had a trawl around in the source, but couldn't see anything obviously wrong. There seemed to be some calls to add_root_certificate, but I wasn't sure if they were being called or if I needed to configure my client somehow or...?

    Wish I could file something more useful, but maybe that's enough detail for someone to point me towards a solution.

    (BTW: I can't employ my certificate work-around in real life, that was just to help understand the problem.)

    bug help wanted config macos 
    opened by garyanaplan 20
  • Remove implicit state from api::Api

    Most of this is redundant anyway:

    • apiVersion/kind: Can be linked to K instead
      • This would also allow us to share a single Api instance, rather than setting up a separate Api<K> per resource type, which should make connection pooling easier
    • namespace (write actions): can be inferred from object metadata
    • namespace (read actions): should probably be taken explicitly instead
    api 
    opened by teozkr 19
  • Reconciliation reason gets stuck as "reconciler requested retry" even if short-circuited

    Current and expected behavior

    Currently, a reconciler that returns Ok(controller::Action::requeue(Duration::from_secs(10))) will always have its reconciliation reason listed as "reconciler requested retry", even if it is triggered by an actual change. Instead, the overriding change should take priority and be listed.

    Possible solution

    Currently, the controller (scheduler, actually) latches on to the first reason that it sees to schedule any given reconciliation. Instead, it should use the reason behind the earliest time to reconcile, just like how the scheduler currently allows subsequent requests to cause the reconciliation to happen earlier.

    I suspect that this means that ScheduleRequest will need to take an extra reason parameter, which we'd move into ScheduledEntry.

    Additional context

    No response

    Environment

    This is a kube-runtime issue, the cluster is irrelevant.

    Configuration and features

    kube = { version = "0.77.0", features = ["runtime"] }
    

    Affected crates

    kube-runtime

    Would you like to work on fixing this bug?

    yes

    bug runtime 
    opened by teozkr 0
  • Upgrading to `0.77` on IPv6 cluster results in `hostname mismatch`

    Current and expected behavior

    Upgraded our controller (which uses the rustls-tls feature) to kube 0.77, and now we are seeing the following errors when creating clients:

    2022-12-22T19:02:42.827284Z ERROR kube_client::client::builder: failed with error error trying to connect:
    error:0A000086:SSL routines:tls_post_process_server_certificate:certificate verify failed:ssl/state/statem_clnt.c:1887:
    hostname mismatch at /src/.cargo/registry/src/github.com-1ecc6299db9ec823/kube-client-0.77.0/src/client/builder.rs:157 
    

    We are using

    let k8s_client = kube::client::Client::try_default()
      .await
      .context(controller_error::ClientCreateSnafu)?;
    

    to create an inferred client from the environment.

    I dropped the following into my code just before the client is created to see what was going on and redeployed the controller:

    let infered_config = kube::Config::infer().await;
    
    match infered_config {
        Ok(v) => println!("infered config: {v:?}"),
        Err(e) => println!("error: {e:?}"),
    }
    

    Here's what it looks like with 0.76:

    infered config: Config {
      cluster_url: https://kubernetes.default.svc/,
      default_namespace: "brupop-bottlerocket-aws",
      root_cert: ...
    }
    

    Here's what it looks like with 0.77:

    infered config: Config { 
      cluster_url: https://[fdc1:cad0:d971::1]/, 
      default_namespace: "brupop-bottlerocket-aws", 
      root_cert: ...
    }
    

    I'm guessing this is likely related to https://github.com/kube-rs/kube/issues/1003. However, my assumption reading the following code:

    https://github.com/kube-rs/kube/blob/7b9c4fe24e5fb57d2689cac56cf966ec9618f6f4/kube-client/src/config/mod.rs#L219-L235

    leads me to believe this should be covered and we'd get a similar behavior to when incluster_dns was being called (instead of manually modifying the cluster_url field). What am I missing? Why is the IP being evaluated here instead of replaced?

    Environment

    ❯ kubectl version --output yaml
    clientVersion:
      buildDate: "2022-09-21T14:33:49Z"
      compiler: gc
      gitCommit: 5835544ca568b757a8ecae5c153f317e5736700e
      gitTreeState: clean
      gitVersion: v1.25.2
      goVersion: go1.19.1
      major: "1"
      minor: "25"
      platform: linux/amd64
    kustomizeVersion: v4.5.7
    serverVersion:
      buildDate: "2022-10-24T20:32:54Z"
      compiler: gc
      gitCommit: b07006b2e59857b13fe5057a956e86225f0e82b7
      gitTreeState: clean
      gitVersion: v1.21.14-eks-fb459a0
      goVersion: go1.16.15
      major: "1"
      minor: 21+
      platform: linux/amd64
    

    Configuration and features

    Here's my change in our Cargo.toml:

    kube = { version = "0.77.0", default-features = true, features = [ "derive", "runtime", "rustls-tls" ] }
    
    ❯ cargo tree -i k8s-openapi
    k8s-openapi v0.16.0
    ├── agent v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/agent)
    ├── apiserver v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/apiserver)
    │   └── agent v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/agent)
    ├── controller v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/controller)
    ├── integ v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/integ)
    ├── kube v0.77.0
    │   ├── agent v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/agent)
    │   ├── apiserver v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/apiserver) (*)
    │   ├── controller v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/controller)
    │   ├── integ v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/integ)
    │   └── models v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/models)
    │       ├── agent v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/agent)
    │       ├── apiserver v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/apiserver) (*)
    │       ├── controller v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/controller)
    │       └── integ v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/integ)
    │       [build-dependencies]
    │       └── yamlgen v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/yamlgen)
    │       [dev-dependencies]
    │       ├── apiserver v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/apiserver) (*)
    │       └── integ v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/integ)
    │   [build-dependencies]
    │   └── yamlgen v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/yamlgen)
    ├── kube-client v0.77.0
    │   ├── kube v0.77.0 (*)
    │   └── kube-runtime v0.77.0
    │       └── kube v0.77.0 (*)
    ├── kube-core v0.77.0
    │   ├── kube v0.77.0 (*)
    │   └── kube-client v0.77.0 (*)
    ├── kube-runtime v0.77.0 (*)
    └── models v0.1.0 (/home/ubuntu/workspace/bottlerocket-os/bottlerocket-update-operator/models) (*)
    
    ❯ cargo tree | grep kube
    │   ├── kube v0.77.0
    │   │   ├── kube-client v0.77.0
    │   │   │   ├── kube-core v0.77.0
    │   │   ├── kube-core v0.77.0 (*)
    │   │   ├── kube-derive v0.77.0 (proc-macro)
    │   │   └── kube-runtime v0.77.0
    │   │       ├── kube-client v0.77.0 (*)
    │   │   ├── kube v0.77.0 (*)
    ├── kube v0.77.0 (*)
    ├── kube v0.77.0 (*)
    ├── kube v0.77.0 (*)
    ├── kube v0.77.0 (*)
    
    bug rustls 
    opened by jpmcb 5
  • Write an in-memory apiserver

    What problem are you trying to solve?

    A thing that continually comes up in the context of controller testing is being able to run the reconciler and verify that it does the right thing.

    In complex scenarios this is difficult for users to do right now without a semi-functioning apiserver.

    We currently recommend using a mock client:

    let (mock_service, handle) = tower_test::mock::pair::<Request<Body>, Response<Body>>();
    let mock_client = Client::new(mock_service, "default");
    

    and pass that into the reconciler's context where we intercept the api calls and return some reasonable information. See controller-rs's fixtures.rs and controller.rs for test invocations.

    It is perfectly possible to do this in tests (and we do), e.g. with this particular wrapper around tower_test::mock::Handle<Request<Body>, Response<Body>>, which responds to an Event being POSTed while also checking some properties of that data:

    async fn handle_event_create(mut self, reason: String) -> Result<Self> {
        let (request, send) = self.0.next_request().await.expect("service not called");
        assert_eq!(request.method(), http::Method::POST);
        assert_eq!(
            request.uri().to_string(),
            format!("/apis/events.k8s.io/v1/namespaces/testns/events?")
        );
        // verify the event reason matches the expected
        let req_body = to_bytes(request.into_body()).await.unwrap();
        let postdata: serde_json::Value =
            serde_json::from_slice(&req_body).expect("valid event from runtime");
        dbg!("postdata for event: {}", postdata.clone());
        assert_eq!(
            postdata.get("reason").unwrap().as_str().map(String::from),
            Some(reason)
        );
        // then pass through the body
        send.send_response(Response::builder().body(Body::from(req_body)).unwrap());
        Ok(self)
    }
    

    The problem with this approach is that:

    • it is verbose (lots of Request / Body / Response / serde_json::Value fiddling)
    • it requires user tests implementing apiserver expected behavior
    • it mixes apiserver imitation behavior with test assertion logic

    Describe the solution you'd like

    Create a dumb, in-memory apiserver that does the bare minimum of what the apiserver does, and presents a queryable interface that can give us what is in "its database" through some type downcasting.

    This server could treat every object as a DynamicObject storing what it sees in a HashMap<ObjectRef, DynamicObject> as an initial memory backing.

    If this was made pluggable into tower_test::mock, users can hook it into tests around a reconciler without the tests failing due to bad apiserver responses, without having to have users know all the ins and outs of apiserver mechanics (and crucially without giving them the opportunity to get this wrong).
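    A minimal sketch of that storage idea (names here are hypothetical; a flat tuple key is used instead of ObjectRef to keep the example self-contained):

    use std::collections::HashMap;
    use kube::core::DynamicObject;

    /// Hypothetical in-memory backing store for a mock apiserver.
    #[derive(Default)]
    struct MockApiserverState {
        // key: (kind, namespace, name)
        objects: HashMap<(String, String, String), DynamicObject>,
    }

    impl MockApiserverState {
        fn upsert(&mut self, kind: &str, ns: &str, obj: DynamicObject) {
            let name = obj.metadata.name.clone().unwrap_or_default();
            self.objects.insert((kind.into(), ns.into(), name), obj);
        }
        fn get(&self, kind: &str, ns: &str, name: &str) -> Option<&DynamicObject> {
            self.objects.get(&(kind.into(), ns.into(), name.into()))
        }
    }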

    Implementation

    We would need at the very least implement basic functionality around metadata on objects:

    • POSTs need to fill in plausible creationTimestamp, uid, resourceVersion, generation, populate name from generateName
    • and prevent clients from overriding read-only values like creationTimestamp / uid / resource_version / generation
    • Respond to queries after storing the query result in the HashMap
    • DELETEs need to traverse ownerReferences

    Implementing create/replace/delete/get calls on resources plus most calls on subresources "should not be too difficult" to do in this context and will benefit everyone.

    The real problem here would be implementing patch in a sensible way:

    • json patches need to actually act on the dynamic object
    • apply patches need to follow kubernetes merge rules and actually do what the apiserver does
    • merge patches, strategic merges with patch strategies need to be followed

    Some of this sounds very hard, but it's possible some of it can be cobbled together using existing ecosystem pieces.

    Anyway, just thought I would write down my thoughts on this. It feels possible, but certainly a bit of a spare time project. If anyone wants this, or would like to tackle this, let us know. I have too much on my plate for something like this right now, but personally I would love to have something like this if it can be done in a sane way.

    Documentation, Adoption, Migration Strategy

    can be integrated into controller-rs plus the controller guide on kube.rs as a start.

    help wanted question 
    opened by clux 2
  • API::delete behavior and idempotency

    Would you like to work on this feature?

    None

    What problem are you trying to solve?

    I'm trying to delete a kubernetes object using Api::delete

    Describe the solution you'd like

    When deleting the object, for my use-case I would like to know if

    • the deletion has started and/or is ongoing
    • the object is fully deleted

    My understanding of Api::delete was that I would use Ok(Either::Left) to determine deletion is ongoing, Ok(Either::Right) to determine it's completed, and that all errors (Err) are of a more general nature.

    However once testing this, I noticed that trying to delete an already deleted object yields an Err. This makes sense according to the documentation

    4XX and 5XX status types are returned as an Err(kube_client::Error::Api).

    But I don't think it's a natural mapping to the delete semantics, which should be idempotent. I would have expected a deletion attempt on an object that is not available (404) to be returned via Ok(Either::Right), which is the same as a deletion that completes immediately. In both cases the result for the caller is that we are sure the object is no longer there.

    The current setup requires special-casing Err(kube::Error::Api(api_error)) if api_error.code == 404 on the caller side and treating it the same as Ok(Either::Right) if one wants to achieve idempotent deletion. In fact in my setup I wasn't even able to observe an Ok(Either::Right) - deletion went directly from Ok(Either::Left) to a 404 error - which I initially did not anticipate due to the API description.

    I obviously realize a change in behavior would be a breaking change, and that automatically trying to treat 404s and 200 results the same might be a loss of information for some applications, so there's obviously a drawback too.

    Another related question I had is what inspection users are expected to be able to perform on an Either::Right if they just want to know that the object is gone. The documentation mentions potentially different 2xx status codes, but I couldn't find any references in the kubernetes docs on whether something other than a 200 status would be expected and what the meaning of other codes is. If someone has a description, it might be helpful to add this to the Api::delete docs.

    Describe alternatives you've considered

    Manually special casing the 404 error - as described above
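    A sketch of that special-casing (the helper name is hypothetical; Api::delete yields Either::Left/Right on success and kube::Error::Api for 4xx/5xx):

    use either::Either;
    use k8s_openapi::api::core::v1::Pod;
    use kube::{api::{Api, DeleteParams}, Error};

    /// Hypothetical helper: Ok(true) once the object is known to be gone,
    /// Ok(false) while deletion is still in progress.
    async fn delete_idempotent(pods: &Api<Pod>, name: &str) -> Result<bool, Error> {
        match pods.delete(name, &DeleteParams::default()).await {
            Ok(Either::Left(_obj)) => Ok(false),    // deletion started, object still present
            Ok(Either::Right(_status)) => Ok(true), // apiserver reports it gone
            Err(Error::Api(ae)) if ae.code == 404 => Ok(true), // already deleted
            Err(e) => Err(e),
        }
    }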

    Documentation, Adoption, Migration Strategy

    No response

    Target crate for feature

    No response

    help wanted api 
    opened by Matthias247 1
  • Sharing watch streams and caches between Controllers

    Would you like to work on this feature?

    maybe

    What problem are you trying to solve?

    Currently, our Controller machinery creates watch streams and a single reflector for the main watch stream only, and these streams/caches are fully managed and internal (except for the single reflector reader).

    This means there is no good way to share streams between other controllers (because the other controller would similarly set up its own watches). This means currently kube_runtime is best suited for smaller controllers and not larger controller manager style ones we see in go.

    I would like to let users configure the watch stream(s) themselves so the streams and caches can be shared between controller instances as a minimum, and try to do this with the least amount of ergonomic pain points in the existing Controller Api. It is currently possible to do this with the applier, but the applier is certified hard mode for most users.

    This has come up before in #824

    Describe the solution you'd like

    This issue aims to start the conversation with 3 ideas:

    1. lift the creation of the controller's QueueStream into a separate builder that can take arbitrary watches
    2. create a full StreamCache container for various watch-listparam pairs
    3. change Controller::watches and Controller::owns to take a watchstream rather than an Api

    NB: A non-goal of this issue is solving the much harder problem of two controllers watching the same api with different ListParams (one might be less strict, so one watch could be a subset of another). This is very hard to do because the ListParams are intrinsic to the watch, and you can have totally orthogonal ListParams that watch only certain labels. It is potentially possible to analytically figure out the largest subset of a full watch, and then somehow filter events down to the relevant events locally (using some kind of watcher interface), but that would be quite hard to do, so not going to talk about that at all here - feel free to write an issue for it!

    QueueStream Abstraction

    A first (bad) idea I had; take the QueueStream builder and make it top-level to try to re-use it between controllers (pseudo-code):

    let qs = QueueStream::for(mainapi, lp1).owns(ownedapi, lp2).watches(watchedapi, lp3, mapper);
    let ctrl = Controller::from(&qs).run(...)
    // re-using
    let stream1 = qs.get_stream(typemeta_for_main_api);
    let stream2 = qs.get_stream(typemeta_for_owned_api);
    // create another queuestream from these underlying streams
    let qs2 = QueueStream::for_stream(steam2).watches_stream(stream1, mapper)
    

    This feels very awkward. Internally it needs to know the params, and relations to apply to each api before it can be passed to the Controller, but the stream it needs to have exposed is the stream before it applies any mapping relations. It also needs two sets of constructors (one for streams and one for api-lp pairs).

    StreamCache Abstraction

    A possibly more direct translation of what we have in go? We make a literal map of streams that can be used in many controllers:

    // init with api-lp pairs
    let scache = StreamCache::new().add(api1, lp1).add(api2, lp2).add(api3, lp3);
    // grab one watch stream
    let crdstream: impl Stream = scache.get(typemeta_for_api1).unwrap();
    // get a cache for it
    let (crdreader, crdwriter) = reflector::store::<MyCrd>();
    let crd_cache = reflector(crdwriter, crdstream);
    // get another stream
    let blahstream = scache.get(typemeta_for_api2).unwrap();
    
    // create a controller using these two watch streams:
    let ctrl1 = Controller::for(crdstream, crd_cache).owns(blahstream)
    
    // get the third stream with cache
    let crdstream2 = scache.get(typemeta_for_api3);
    let (crdreader2, crdwriter2) = reflector::store::<My2ndCrd>();
    let crd_cache2 = reflector(crdwriter2, crdstream2);
    
    // create a controller for the second crd (api3) but re-using the watch from api2 with a different relation
    let ctrl2 = Controller::for(crdstream2, crd_cache2).watches(bladstream, mapper)
    

    I'm not a huge fan of this because the cache struct currently does not do much. It just stores the streams, but the user has to do all of the reflector mapping themselves.

    Maybe it is possible here to do a StreamCache::into_queuestream and have that be accepted by a Controller, but I'm having difficulties envisioning this as the right path.

    Controller only takes streams, no cache struct

    If we are already changing the controller to take streams rather than api-lp pairs, we should possibly encourage direct ownership for now and just teach users to deal with the streams. We have already done so much work on WatchStreamExt anyway.

    let api1: Api<Kind1> = ...
    let api2: Api<Kind2> = ...
    let api3: Api<Kind3> = ...
    let (reader1, writer1) = reflector::store::<Kind1>();
    let (reader3, writer3) = reflector::store::<Kind3>();
    let watch1 = reflector(writer1, watcher(api1, lp1));
    let watch2 = watcher(api2, lp2); // this one we don't need a cache for
    let watch3 = reflector(write3, watcher(api3, lp3));
    
    let ctrl1 = Controller::for(api1, writer1).owns(watch2);
    let ctrl2 = Controller::for(api3, writer3).watches(watch2, mapper);
    

    This feels pretty natural, and all ownership is managed in main, but it leaves a lot up to the user (in terms of gluing), so there are a lot of chances for the user to map the wrong listparams to the wrong type (with potentially hard-to-decipher errors if we don't introduce more telemetry), and it could be a somewhat painful journey.

    I think this is ultimately the most sensible starting point though.

    Maybe this can be done with some helper struct that can be fed into the Controller in some way that minimises potential user errors. Ideas welcome.

    Documentation, Adoption, Migration Strategy

    Will write a guide for this on kube.rs once we have something.

    Target crate for feature

    kube-runtime

    help wanted question runtime 
    opened by clux 0
Releases (0.78.0)
  • 0.78.0(Jan 6, 2023)

    Kubernetes Bump

    This release brings in the new k8s-openapi release for 1.26 structs, and sets our MK8SV to 1.21. Be sure to upgrade k8s-openapi and kube simultaneously to avoid multiple version errors:

    cargo upgrade -p k8s-openapi -p kube -i
    

    What's Changed

    Added

    • reflector: add helper function to the Store by @eliad-wiz in https://github.com/kube-rs/kube/pull/1111

    Changed

    Removed

    • Remove deprecated Config::timeout by @clux in https://github.com/kube-rs/kube/pull/1113

    Fixed

    • fix shell exec exiting message loop when terminalSizeReceiver is dropped by @erebe in https://github.com/kube-rs/kube/pull/1112

    New Contributors

    • @eliad-wiz made their first contribution in https://github.com/kube-rs/kube/pull/1111
    • @erebe made their first contribution in https://github.com/kube-rs/kube/pull/1112

    Full Changelog: https://github.com/kube-rs/kube/compare/0.77.0...0.78.0

  • 0.77.0(Dec 15, 2022)

    Highlights

    This release saw numerous improvements across various parts of the codebase with lots of help from external contributors. Look for improvements in error handling, client exec behaviour, dynamic object conversion, certificate handling, and, last but not least, lots of enhancements in the config module. Huge thanks to everyone who contributed!

    Config Enhancements

    Kubeconfigs relying on ExecConfig for auth should now work with a lot more cases (with improvements to script interactivity, cert passing, env-drop, and windows behaviour). We further aligned our Kubeconfig parsing with client-go's behaviour, and also exposed Kubeconfig::merge. Finally, we now pass Config::tls_server_name through to the Client, which has let us include a better rustls workaround for the long-standing ip issue (enabled by default).
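    A sketch of what the newly public Kubeconfig::merge enables (paths and error handling here are illustrative assumptions):

    use kube::config::{Config, KubeConfigOptions, Kubeconfig};

    let base = Kubeconfig::read_from("/path/to/base.kubeconfig")?;
    let overlay = Kubeconfig::read_from("/path/to/overlay.kubeconfig")?;
    let merged = base.merge(overlay)?;
    let config = Config::from_custom_kubeconfig(merged, &KubeConfigOptions::default()).await?;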

    What's Changed

    Added

    • Add DynamicObjects::try_parse for typed object conversion by @jmintb in https://github.com/kube-rs/kube/pull/1061
    • Add ExecConfig::drop_env to filter host evars for auth providers by @aviramha in https://github.com/kube-rs/kube/pull/1062
    • Add support for terminal size when executing command inside a container by @armandpicard in https://github.com/kube-rs/kube/pull/983
    • add cmd-drop-env to AuthProviderConfig by @aviramha in https://github.com/kube-rs/kube/pull/1074
    • Check for client cert with exec by @rcanderson23 in https://github.com/kube-rs/kube/pull/1089
    • Change Kubeconfig::merge fn to public. by @goenning in https://github.com/kube-rs/kube/pull/1100
    • Fix interactivity in auth exec by @armandpicard in https://github.com/kube-rs/kube/pull/1083

    Changed

    • [windows] skip window creation on auth exec by @goenning in https://github.com/kube-rs/kube/pull/1095
    • Add Config::tls_server_name and validate when using rustls by @clux in https://github.com/kube-rs/kube/pull/1104

    Removed

    • Remove deprecated ResourceExt::name by @clux in https://github.com/kube-rs/kube/pull/1105

    Fixed

    • Bump tracing dependency to 0.1.36 by @teozkr in https://github.com/kube-rs/kube/pull/1070
    • Improve error message on azure auth not being supported by @goenning in https://github.com/kube-rs/kube/pull/1082
    • exec: ensure certs always end with a new line by @goenning in https://github.com/kube-rs/kube/pull/1096
    • fix: align kube-rs with client-go config parsing by @goenning in https://github.com/kube-rs/kube/pull/1077
    • Return error from watcher when kinds do not support watch by @clux in https://github.com/kube-rs/kube/pull/1101

    New Contributors

    • @jmintb made their first contribution in https://github.com/kube-rs/kube/pull/1061
    • @aviramha made their first contribution in https://github.com/kube-rs/kube/pull/1062
    • @armandpicard made their first contribution in https://github.com/kube-rs/kube/pull/983
    • @suryapandian made their first contribution in https://github.com/kube-rs/kube/pull/1081
    • @rcanderson23 made their first contribution in https://github.com/kube-rs/kube/pull/1089

    Full Changelog: https://github.com/kube-rs/kube/compare/0.76.0...0.77.0

  • 0.76.0(Oct 28, 2022)

    Highlights

    #[derive(CustomResource)] now supports schemas with untagged enums

    Expanding on our existing support for storing Rust's struct enums in CRDs, Kube will now try to convert #[serde(untagged)] enums as well. Note that if the same field is present in multiple untagged variants then they must all have the same shape.
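    A hedged sketch of the kind of spec field this enables (the enum and field names are illustrative):

    use schemars::JsonSchema;
    use serde::{Deserialize, Serialize};
    use std::collections::BTreeMap;

    #[derive(Serialize, Deserialize, Clone, Debug, JsonSchema)]
    #[serde(untagged)]
    enum NameOrSelector {
        /// a plain name
        Name(String),
        /// or a label selector
        Selector { labels: BTreeMap<String, String> },
    }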

    Removed deprecated try_flatten_* functions

    These have been deprecated since 0.72, and are replaced by the equivalent WatchStreamExt methods.

    What's Changed

    Added

    • Adds example to Controller::watches by @Dav1dde in https://github.com/kube-rs/kube/pull/1026
    • Discovery: Add ApiGroup::resources_by_stability by @imuxin in https://github.com/kube-rs/kube/pull/1022
    • Add support for untagged enums in CRDs by @sbernauer in https://github.com/kube-rs/kube/pull/1028
    • Derive PartialEq for DynamicObject by @pbzweihander in https://github.com/kube-rs/kube/pull/1048

    Removed

    • Runtime: Remove deprecated util try_flatten_ helpers by @clux in https://github.com/kube-rs/kube/pull/1019
    • Remove native-tls feature by @kazk in https://github.com/kube-rs/kube/pull/1044

    Fixed

    • add fieldManager querystring to all operations by @goenning in https://github.com/kube-rs/kube/pull/1031
    • Add verify_tls1x_signature for NoCertVerification by @rvql in https://github.com/kube-rs/kube/pull/1034
    • Fix compatibility with schemars' preserve_order feature by @teozkr in https://github.com/kube-rs/kube/pull/1050
    • Hoist enum values from subschemas by @teozkr in https://github.com/kube-rs/kube/pull/1051

    New Contributors

    • @Dav1dde made their first contribution in https://github.com/kube-rs/kube/pull/1026
    • @rvql made their first contribution in https://github.com/kube-rs/kube/pull/1034
    • @imuxin made their first contribution in https://github.com/kube-rs/kube/pull/1022

    Full Changelog: https://github.com/kube-rs/kube/compare/0.75.0...0.76.0

  • 0.75.0(Sep 22, 2022)

    Highlights

    Upgrade k8s-openapi to 0.16 for Kubernetes 1.25

    The update to k8s-openapi 0.16 makes this the first release with tentative Kubernetes 1.25 support. While the new structs and apis now exist, we recommend holding off on using 1.25 until a deserialization bug in the apiserver is resolved upstream. See #997 / #1008 for details.

    To upgrade, ensure you bump both kube and k8s-openapi:

    cargo upgrade kube k8s-openapi
    

    New/Old Config::incluster default to connect in cluster

    Our previous default of connecting to the Kubernetes apiserver via kubernetes.default.svc has been reverted back to use the old environment variables after Kubernetes updated their position that the environment variables are not legacy. This does unfortunately regress on rustls support, so for those users we have included a Config::incluster_dns to work around the old rustls issue while it is open.
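    A sketch of choosing between the two constructors (error handling simplified; check the Config docs for exact signatures):

    use kube::{Client, Config};

    // Prefer the env-based in-cluster config; fall back to cluster DNS (useful for rustls)
    let config = Config::incluster().or_else(|_| Config::incluster_dns())?;
    let client = Client::try_from(config)?;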

    Controller error_policy extension

    The error_policy fn now has access to the object that failed the reconciliation to ease metric creation / failure attribution. The following change is needed on the user side:

    -fn error_policy(error: &Error, ctx: Arc<Data>) -> Action {
    +fn error_policy(_obj: Arc<YourObject>, error: &Error, ctx: Arc<Data>) -> Action {
    

    Polish / Subresources / Conversion

    There is also a slew of ergonomics improvements, closing of gaps in subresources, adding initial support for ConversionReview, making Api::namespaced impossible to use for non-namespaced resources (a common pitfall), as well as many great fixes to the edge cases in portforwarding and finalizers. Many of these changes came from first time contributors. A huge thank you to everyone involved.

    What's Changed

    Added

    • Make Config::auth_info public by @danrspencer in https://github.com/kube-rs/kube-rs/pull/959
    • Make raw Client::send method public by @tiagolobocastro in https://github.com/kube-rs/kube-rs/pull/972
    • Make types on AdmissionRequest and AdmissionResponse public by @clux in https://github.com/kube-rs/kube-rs/pull/977
    • Add #[serde(default)] to metadata field of DynamicObject by @pbzweihander in https://github.com/kube-rs/kube-rs/pull/987
    • Add create_subresource method to Api and create_token_request method to Api<ServiceAccount> by @pbzweihander in https://github.com/kube-rs/kube-rs/pull/989
    • Controller: impl Eq and PartialEq for Action by @Sherlock-Holo in https://github.com/kube-rs/kube-rs/pull/993
    • Add support for CRD ConversionReview types by @MikailBag in https://github.com/kube-rs/kube-rs/pull/999

    Changed

    • Constrain Resource trait and Api::namespaced by Scope by @clux in https://github.com/kube-rs/kube-rs/pull/956
    • Add connect/read/write timeouts to Config by @goenning in https://github.com/kube-rs/kube-rs/pull/971
    • Controller: Include the object being reconciled in the error_policy by @felipesere in https://github.com/kube-rs/kube-rs/pull/995
    • Config: New incluster and incluster_dns constructors by @olix0r in https://github.com/kube-rs/kube-rs/pull/1001
    • Upgrade k8s-openapi to 0.16 by @clux in https://github.com/kube-rs/kube-rs/pull/1008

    Fixed

    • Remove tracing::instrument from apply_debug_overrides by @kazk in https://github.com/kube-rs/kube-rs/pull/958
    • fix duplicate finalizers race condition by @alex-hunt-materialize in https://github.com/kube-rs/kube-rs/pull/965
    • fix: portforward connection cleanup by @tiagolobocastro in https://github.com/kube-rs/kube-rs/pull/973

    New Contributors

    • @danrspencer made their first contribution in https://github.com/kube-rs/kube-rs/pull/959
    • @alex-hunt-materialize made their first contribution in https://github.com/kube-rs/kube-rs/pull/965
    • @tiagolobocastro made their first contribution in https://github.com/kube-rs/kube-rs/pull/972
    • @goenning made their first contribution in https://github.com/kube-rs/kube-rs/pull/971
    • @pbzweihander made their first contribution in https://github.com/kube-rs/kube-rs/pull/987
    • @Sherlock-Holo made their first contribution in https://github.com/kube-rs/kube-rs/pull/993
    • @felipesere made their first contribution in https://github.com/kube-rs/kube-rs/pull/995

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.74.0...0.75.0

  • 0.74.0(Jul 10, 2022)

    Highlights

    Polish, bug fixes, guidelines, ci improvements, and new contributors

    This release features smaller improvements/additions/cleanups/fixes, many of which are from new first-time contributors! Thank you everyone! The listed deadlock fix was backported to 0.73.1.

    We have also been trying to clarify and prove a lot more of our external-facing guarantees, and as a result:

    ResourceExt::name deprecation

    As a consequence of all the policy writing and the improved clarity, we have decided to deprecate the common ResourceExt::name helper.

    This method could panic, which is unexpected for users and bad for our consistency. To get the old functionality, you can replace any .name() call on a Kubernetes resource with .name_unchecked(); but as the name implies, it can panic (in a local setting, or during admission). We recommend you replace it with the new ResourceExt::name_any for a general identifier:

    -pod.name()
    +pod.name_any()
    

    What's Changed

    Added

    • Add support for passing the fieldValidation query parameter on patch by @phroggyy in https://github.com/kube-rs/kube-rs/pull/929
    • Add conditions::is_job_completed by @clux in https://github.com/kube-rs/kube-rs/pull/935

    Changed

    • Deprecate ResourceExt::name in favour of safe name_* alternatives by @clux in https://github.com/kube-rs/kube-rs/pull/945

    Removed

    • Remove #[kube(apiextensions)] flag from kube-derive by @clux in https://github.com/kube-rs/kube-rs/pull/920

    Fixed

    • Document every public derived fn from kube-derive by @clux in https://github.com/kube-rs/kube-rs/pull/919
    • fix applier hangs which can happen with many watched objects by @moustafab in https://github.com/kube-rs/kube-rs/pull/925
    • Applier: Improve reconciler reschedule context to avoid deadlocking on full channel by @teozkr in https://github.com/kube-rs/kube-rs/pull/932
    • Fix deserialization issue in AdmissionResponse by @clux in https://github.com/kube-rs/kube-rs/pull/939
    • Admission controller example fixes by @Alibirb in https://github.com/kube-rs/kube-rs/pull/950

    New Contributors

    • @moustafab made their first contribution in https://github.com/kube-rs/kube-rs/pull/925
    • @phroggyy made their first contribution in https://github.com/kube-rs/kube-rs/pull/929
    • @Alibirb made their first contribution in https://github.com/kube-rs/kube-rs/pull/950

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.73.0...0.74.0

  • 0.73.1(Jun 3, 2022)

    Highlights

    This patch release fixes a bug causing applier and Controller to deadlock when too many Kubernetes object change events were ingested at once. All users of applier and Controller are encouraged to upgrade as quickly as possible. Older versions are also affected; this bug is believed to have existed since the original release of kube_runtime.

    What's Changed

    Fixed

    • [0.73 backport] fix applier hangs which can happen with many watched objects (#925) by @moustafab (backported by @teozkr) in https://github.com/kube-rs/kube-rs/pull/927

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.73.0...0.73.1

  • 0.73.0(May 23, 2022)

    Highlights

    New k8s-openapi version and MSRV

    Support added for Kubernetes v1_24 via the new k8s-openapi version. Please also run cargo upgrade --workspace k8s-openapi when upgrading kube.

    This also bumps our MSRV to 1.60.0.

    Reconciler change

    A small ergonomic change in the reconcile signature has removed the need for the Context object. This has been replaced by an Arc. The following change is needed in your controller:

    -async fn reconcile(doc: Arc<MyObject>, context: Context<Data>) -> Result<Action, Error>
    +async fn reconcile(doc: Arc<MyObject>, context: Arc<Data>) -> Result<Action, Error>
    

    This simplifies usage of the context argument: you no longer need to call .get_ref() on every use. See the controller-rs upgrade change for details.
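
    As a minimal sketch (Data is a hypothetical user-defined context struct and the error type is a stand-in), context fields are now read straight off the Arc:

    use std::{sync::Arc, time::Duration};
    use k8s_openapi::api::core::v1::Pod;
    use kube::{runtime::controller::Action, Client};

    // hypothetical user-defined context; passed to Controller::run as Arc::new(Data { .. })
    struct Data { client: Client }
    type Error = Box<dyn std::error::Error + Send + Sync>;

    async fn reconcile(obj: Arc<Pod>, ctx: Arc<Data>) -> Result<Action, Error> {
        // context fields are accessed directly on the Arc; no .get_ref() calls anymore
        let _client = ctx.client.clone();
        println!("reconciling {:?}", obj.metadata.name);
        Ok(Action::requeue(Duration::from_secs(300)))
    }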

    What's Changed

    Added

    • Add Discovery::groups_alphabetical following kubectl sort order by @clux in https://github.com/kube-rs/kube-rs/pull/887

    Changed

    • Replace runtime::controller::Context with Arc by @teozkr in https://github.com/kube-rs/kube-rs/pull/910
    • runtime: Return the object from await_condition by @olix0r in https://github.com/kube-rs/kube-rs/pull/877
    • Bump k8s-openapi to 0.15 for kubernetes v1_24 and bump MSRV to 1.60 by @clux in https://github.com/kube-rs/kube-rs/pull/916

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.72.0...0.73.0

  • 0.72.0(May 13, 2022)

    Highlights

    Ergonomics improvements

    A new runtime::WatchStreamExt (#899 + #906) allows simpler setups for streams from watcher or reflector.

    - let stream = utils::try_flatten_applied(StreamBackoff::new(watcher(api, lp), b));
    + let stream = watcher(api, lp).backoff(b).applied_objects();
    

    The util::try_flatten_* helpers have been marked as deprecated, as they are superseded by the new stream extension methods.
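
    A minimal sketch of the new chaining style, assuming an existing Client, the backoff crate for the retry policy, and anyhow for brevity:

    use backoff::ExponentialBackoff;
    use futures::{StreamExt, TryStreamExt};
    use k8s_openapi::api::core::v1::Pod;
    use kube::{api::{Api, ListParams}, runtime::{watcher, WatchStreamExt}, Client};

    async fn watch_pods(client: Client) -> anyhow::Result<()> {
        let api: Api<Pod> = Api::namespaced(client, "default");
        // watch -> retry with backoff -> flatten to plain objects
        let mut stream = watcher(api, ListParams::default())
            .backoff(ExponentialBackoff::default())
            .applied_objects()
            .boxed_local();
        while let Some(pod) = stream.try_next().await? {
            println!("applied: {:?}", pod.metadata.name);
        }
        Ok(())
    }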

    A new reflector::store() fn allows simpler reflector setups (#907):

    - let store = reflector::store::Writer::<Node>::default();
    - let reader = store.as_reader();
    + let (reader, writer) = reflector::store();
    
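    A minimal sketch wiring the new helper into a reflector (assuming an existing Client and anyhow for brevity); the reader handle can be cloned into other tasks for cache lookups:

    use futures::{StreamExt, TryStreamExt};
    use k8s_openapi::api::core::v1::Node;
    use kube::{api::{Api, ListParams}, runtime::{reflector, watcher, WatchStreamExt}, Client};

    async fn cache_nodes(client: Client) -> anyhow::Result<()> {
        let (reader, writer) = reflector::store::<Node>();
        let nodes: Api<Node> = Api::all(client);
        // the reflector keeps `reader` up to date while the stream is polled
        let mut stream = reflector(writer, watcher(nodes, ListParams::default()))
            .applied_objects()
            .boxed_local();
        while let Some(node) = stream.try_next().await? {
            println!("saw {:?}; cache size: {}", node.metadata.name, reader.state().len());
        }
        Ok(())
    }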

    Additional convenience getters/setters have been added to ResourceExt for managed_fields and creation_timestamp (#888 + #898), plus a GroupVersion::with_kind path to a GVK, and a TryFrom<TypeMeta> for GroupVersionKind in #896.

    CRD Version Selection

    Managing multiple versions in CustomResourceDefinitions can be pretty complicated, but we now have helpers and docs on how to tackle it.

    A new function, kube::core::crd::merge_crds, has been added (in #889) to help merge CRD schemas generated by kube-derived CRDs with different #[kube(version)] properties. See the kube-derive#version documentation for details.

    A new example showcases how one can manage two or more versions of a CRD and what the expected truncation outcomes are when moving between versions.
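
    A rough sketch of the flow, assuming merge_crds takes the generated CRD objects plus the version to mark as stored, and two kube-derive invocations of the same kind in separate modules:

    use k8s_openapi::apiextensions_apiserver::pkg::apis::apiextensions::v1::CustomResourceDefinition;
    use kube::{core::crd::merge_crds, CustomResource, CustomResourceExt};
    use schemars::JsonSchema;
    use serde::{Deserialize, Serialize};

    mod v1 {
        use super::*;
        #[derive(CustomResource, Serialize, Deserialize, Clone, Debug, JsonSchema)]
        #[kube(group = "clux.dev", version = "v1", kind = "Foo", namespaced)]
        pub struct FooSpec { pub name: String }
    }
    mod v2 {
        use super::*;
        #[derive(CustomResource, Serialize, Deserialize, Clone, Debug, JsonSchema)]
        #[kube(group = "clux.dev", version = "v2", kind = "Foo", namespaced)]
        pub struct FooSpec { pub name: String, pub info: Option<String> }
    }

    fn combined_crd() -> CustomResourceDefinition {
        // merge the generated schemas into one CRD, with v1 as the stored version
        merge_crds(vec![v1::Foo::crd(), v2::Foo::crd()], "v1").expect("versions merge")
    }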

    Examples

    Examples have now moved to tracing for logging, respect RUST_LOG, and support namespace selection via the kubeconfig context. There is also a larger kubectl example showcasing kubectl apply -f yaml as well as kubectl {edit,delete,get,watch} via #885 + #897.

    What's Changed

    Added

    • Allow merging multi-version CRDs into a single schema by @clux in https://github.com/kube-rs/kube-rs/pull/889
    • Add GroupVersion::with_kind and TypeMeta -> GroupVersionKind converters by @clux in https://github.com/kube-rs/kube-rs/pull/896
    • Add managed_fields accessors to ResourceExt by @clux in https://github.com/kube-rs/kube-rs/pull/898
    • Add ResourceExt::creation_timestamp by @clux in https://github.com/kube-rs/kube-rs/pull/888
    • Support lowercase http_proxy & https_proxy evars by @DevineLiu in https://github.com/kube-rs/kube-rs/pull/892
    • Add a WatchStreamExt trait for stream chaining by @clux in https://github.com/kube-rs/kube-rs/pull/899
    • Add Event::modify + reflector::store helpers by @clux in https://github.com/kube-rs/kube-rs/pull/907

    Changed

    • Switch to kubernetes cluster dns for incluster url everywhere by @clux in https://github.com/kube-rs/kube-rs/pull/876
    • Update tower-http requirement from 0.2.0 to 0.3.2 by @dependabot in https://github.com/kube-rs/kube-rs/pull/893

    Removed

    • Remove deprecated legacy crd v1beta1 by @clux in https://github.com/kube-rs/kube-rs/pull/890

    New Contributors

    • @DevineLiu made their first contribution in https://github.com/kube-rs/kube-rs/pull/892

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.71.0...0.72.0

  • 0.71.0(Apr 12, 2022)

    Highlights

    This release brings several quality-of-life changes and improvements, such as better port-forwarding, a new ClientBuilder, and better handling of kube-derive edge cases.

    We highlight some changes here that you should be especially aware of:

    events::Recorder publishing to kube-system for cluster scoped resources

    Publishing events via Recorder for cluster scoped resources (supported since 0.70.0) now publishes to kube-system rather than default, as all but the newest clusters struggle with publishing events in the default namespace.
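
    A minimal sketch of publishing an event through the runtime Recorder (the Reporter name and the referenced Node are illustrative); for cluster scoped resources the event now lands in kube-system:

    use k8s_openapi::api::core::v1::Node;
    use kube::runtime::events::{Event, EventType, Recorder, Reporter};
    use kube::{Client, Resource};

    async fn report(client: Client, node: &Node) -> Result<(), kube::Error> {
        let reporter = Reporter { controller: "my-controller".into(), instance: None };
        // Node is cluster scoped, so its dynamic type is the unit type
        let recorder = Recorder::new(client, reporter, node.object_ref(&()));
        recorder.publish(Event {
            type_: EventType::Normal,
            reason: "Checked".into(),
            note: Some("node inspected".into()),
            action: "Checking".into(),
            secondary: None,
        }).await?;
        Ok(())
    }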

    Default TLS stack set to OpenSSL

    The previous native-tls default was there because we used to depend on reqwest, but since we depended on openssl anyway, the feature did not make much sense. Changing to openssl-tls also improves the situation on macOS, where the Security Framework struggles with PKCS#12 certs from OpenSSL v3. The native-tls feature will still be available in this release in case of issues, but the plan is to decommission it shortly. Of course, we all ideally want to move to rustls, but we are still blocked by #153.

    What's Changed

    Added

    • Add ClientBuilder that lets users add custom middleware without full stack replacement by @teozkr in https://github.com/kube-rs/kube-rs/pull/855
    • Support top-level enums in CRDs by @sbernauer in https://github.com/kube-rs/kube-rs/pull/856

    Changed

    • portforward: Improve API and support background task cancelation by @olix0r in https://github.com/kube-rs/kube-rs/pull/854
    • Make remote commands cancellable and remove panics by @kazk in https://github.com/kube-rs/kube-rs/pull/861
    • Change the default TLS to OpenSSL by @kazk in https://github.com/kube-rs/kube-rs/pull/863
    • change event recorder cluster namespace to kube-system by @clux in https://github.com/kube-rs/kube-rs/pull/871

    Fixed

    • Fix schemas containing both properties and additionalProperties by @jcaesar in https://github.com/kube-rs/kube-rs/pull/845
    • Make dependency pins between sibling crates stricter by @clux in https://github.com/kube-rs/kube-rs/pull/864
    • Fix in-cluster kube_host_port generation for IPv6 by @somnusfish in https://github.com/kube-rs/kube-rs/pull/875

    New Contributors

    • @jcaesar made their first contribution in https://github.com/kube-rs/kube-rs/pull/845
    • @somnusfish made their first contribution in https://github.com/kube-rs/kube-rs/pull/875

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.70.0...0.71.0

  • 0.70.0(Mar 20, 2022)

    Highlights

    Support for EC keys with rustls

    This was one of the big blockers for using rustls against clusters like k3d or k3s. While not sufficient to fix using those clusters out of the box, it is now possible to use them with a workaround.

    More ergonomic reconciler

    The signature and the Ok action returned by reconcile fns have been simplified slightly, and require the following user updates:

    -async fn reconcile(obj: Arc<MyObject>, ctx: Context<Data>) -> Result<ReconcilerAction, Error> {
    -    ...
    -    Ok(ReconcilerAction {
    -        requeue_after: Some(Duration::from_secs(300)),
    -    })
    +async fn reconcile(obj: Arc<MyObject>, ctx: Context<Data>) -> Result<Action, Error> {
    +    ...
    +    Ok(Action::requeue(Duration::from_secs(300)))
    

    The Action import lives in the same place as the old ReconcilerAction.

    What's Changed

    Added

    • Add support for EC private keys by @farcaller in https://github.com/kube-rs/kube-rs/pull/804
    • Add helper for creating a controller owner_ref on Resource by @clux in https://github.com/kube-rs/kube-rs/pull/850

    Changed

    • Remove scheduler::Error by @teozkr in https://github.com/kube-rs/kube-rs/pull/827
    • Bump parking_lot to 0.12, but allow dep duplicates by @clux in https://github.com/kube-rs/kube-rs/pull/836
    • Update tokio-tungstenite requirement from 0.16.1 to 0.17.1 by @dependabot in https://github.com/kube-rs/kube-rs/pull/841
    • Let OccupiedEntry::commit take PostParams by @teozkr in https://github.com/kube-rs/kube-rs/pull/842
    • Change ReconcilerAction to Action and add associated ctors by @clux in https://github.com/kube-rs/kube-rs/pull/851

    Fixed

    • Fix deadlock in token reloading by @clux in https://github.com/kube-rs/kube-rs/pull/830 - also in 0.69.1
    • Token reloading with RwLock by @kazk in https://github.com/kube-rs/kube-rs/pull/835
    • Fix event publishing for cluster scoped crds by @zhrebicek in https://github.com/kube-rs/kube-rs/pull/847
    • Fix invalid CRD when Enum variants have descriptions by @sbernauer in https://github.com/kube-rs/kube-rs/pull/852

    New Contributors

    • @chinatsu made their first contribution in https://github.com/kube-rs/kube-rs/pull/834
    • @farcaller made their first contribution in https://github.com/kube-rs/kube-rs/pull/804
    • @zhrebicek made their first contribution in https://github.com/kube-rs/kube-rs/pull/847
    • @sbernauer made their first contribution in https://github.com/kube-rs/kube-rs/pull/852

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.69.0...0.70.0

  • 0.69.1(Feb 16, 2022)

    Highlights

    This is an emergency patch release fixing a bug in 0.69.0 where a kube::Client would deadlock after running inside a cluster for about a minute (#829).

    All users of 0.69.0 are encouraged to upgrade immediately. 0.68.x and below are not affected.

    What's Changed

    Fixed

    • [0.69.x] Fix deadlock in token reloading by @clux (backported by @teozkr) in https://github.com/kube-rs/kube-rs/pull/831

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.69.0...0.69.1

  • 0.69.0(Feb 14, 2022)

    Highlights

    Ergonomic Additions to Api

    Two new methods have been added to the client Api this release to reduce the amount of boilerplate needed for common patterns.
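
    One of them, Api::get_opt, turns the usual 404-handling dance into an Option; a minimal sketch assuming an existing Client:

    use k8s_openapi::api::core::v1::Pod;
    use kube::{Api, Client};

    async fn find_pod(client: Client) -> Result<(), kube::Error> {
        let pods: Api<Pod> = Api::namespaced(client, "default");
        // None instead of an Err(404) when the object does not exist
        match pods.get_opt("blog").await? {
            Some(pod) => println!("found {:?}", pod.metadata.name),
            None => println!("no pod named blog"),
        }
        Ok(())
    }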

    In-cluster Token reloading

    Following a requirement for Kubernetes clients against versions >= 1.22.0, our bundled AuthLayer will reload tokens every minute when deployed in-cluster.

    What's Changed

    Added

    • Add conversion for ObjectRef<K> to ObjectReference by @teozkr in https://github.com/kube-rs/kube-rs/pull/815
    • Add Api::get_opt for better existence handling by @teozkr in https://github.com/kube-rs/kube-rs/pull/809
    • Entry API by @teozkr in https://github.com/kube-rs/kube-rs/pull/811

    Changed

    • Reload token file at least once a minute by @kazk in https://github.com/kube-rs/kube-rs/pull/768
    • Prefer kubeconfig over in-cluster config by @teozkr in https://github.com/kube-rs/kube-rs/pull/823

    Fixed

    • Disable CSR utilities on K8s <1.19 by @teozkr in https://github.com/kube-rs/kube-rs/pull/817

    New Contributors

    • @hasheddan made their first contribution in https://github.com/kube-rs/kube-rs/pull/813

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.68.0...0.69.0

  • 0.68.0(Feb 1, 2022)

    Interface Changes

    To reduce the amount of allocation done inside the runtime by reflectors and controllers, the following change via #786 is needed on the signature of your reconcile functions:

    -async fn reconcile(myobj: MyK, ctx: Context<Data>) -> Result<ReconcilerAction>
    +async fn reconcile(myobj: Arc<MyK>, ctx: Context<Data>) -> Result<ReconcilerAction>
    

    This also affects the finalizer helper.

    Port-forwarding

    As one of the last steps toward gold level client requirements, port-forwarding landed in #446. There are 3 new examples (port_forward*.rs) that showcase how to use this websocket-based functionality.
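
    A rough sketch in the style of those examples (requires the ws feature; assumes an existing Client and a pod serving HTTP on port 80, with anyhow for brevity):

    use k8s_openapi::api::core::v1::Pod;
    use kube::{Api, Client};
    use tokio::io::{AsyncReadExt, AsyncWriteExt};

    async fn forward(client: Client) -> anyhow::Result<()> {
        let pods: Api<Pod> = Api::namespaced(client, "default");
        let mut pf = pods.portforward("blog", &[80]).await?;
        if let Some(mut stream) = pf.take_stream(80) {
            // speak plain HTTP over the forwarded connection
            stream.write_all(b"GET / HTTP/1.1\r\nHost: 127.0.0.1\r\nConnection: close\r\n\r\n").await?;
            let mut buf = vec![0u8; 4096];
            let n = stream.read(&mut buf).await?;
            println!("{}", String::from_utf8_lossy(&buf[..n]));
        }
        Ok(())
    }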

    What's Changed

    Added

    • Add a VS Code devcontainer configuration by @olix0r in https://github.com/kube-rs/kube-rs/pull/788
    • Add support for user impersonation by @teozkr in https://github.com/kube-rs/kube-rs/pull/797
    • Add port forward by @kazk in https://github.com/kube-rs/kube-rs/pull/446

    Changed

    • runtime: Store resources in an Arc by @olix0r in https://github.com/kube-rs/kube-rs/pull/786
    • Propagate Arc through the finalizer reconciler helper by @teozkr in https://github.com/kube-rs/kube-rs/pull/792
    • Disable unused default features of chrono crate by @dreamer in https://github.com/kube-rs/kube-rs/pull/801

    Fixed

    • Use absolute path to Result in derives by @teozkr in https://github.com/kube-rs/kube-rs/pull/795
    • core: add missing reason to Display on Error::Validation in Request by @clux in https://github.com/kube-rs/kube-rs/pull/798

    New Contributors

    • @dreamer made their first contribution in https://github.com/kube-rs/kube-rs/pull/801

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.67.0...0.68.0

  • 0.67.0(Jan 25, 2022)

    What's Changed

    Changed

    • runtime: Replace DashMap with a locked AHashMap by @olix0r in https://github.com/kube-rs/kube-rs/pull/785
    • update k8s-openapi for kubernetes 1.23 support by @clux in https://github.com/kube-rs/kube-rs/pull/789

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.66.0...0.67.0

  • 0.66.0(Jan 15, 2022)

    News

    Tons of small feature additions and 3 new contributors in this milestone. Highlighted first are the three most discussed changes:

    Support for auto-generating schemas for enums in kube-derive

    It is now possible to embed complex enums inside structs that use #[derive(CustomResource)].

    This has been a highly requested feature since the inception of auto-generated schemas. It does not work for all cases, and has certain ergonomic caveats, but it represents a huge step forward.

    Note that if you depend on kube-derive directly rather than via kube, then you must now enable the schema feature of kube-core.
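
    A hypothetical spec embedding such an enum (the exact representations that are supported, and their caveats, are covered in the kube-derive docs):

    use kube::CustomResource;
    use schemars::JsonSchema;
    use serde::{Deserialize, Serialize};

    // a complex enum: struct-like variants can now get a generated schema
    #[derive(Serialize, Deserialize, Clone, Debug, JsonSchema)]
    #[serde(rename_all = "camelCase")]
    pub enum Auth {
        BearerToken { secret_name: String },
        Anonymous,
    }

    #[derive(CustomResource, Serialize, Deserialize, Clone, Debug, JsonSchema)]
    #[kube(group = "clux.dev", version = "v1", kind = "Registry", namespaced)]
    pub struct RegistrySpec {
        pub url: String,
        pub auth: Auth,
    }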

    New StreamBackoff mechanism in kube-runtime

    To avoid spamming the apiserver on certain watch error cases, it's now possible to wrap the watcher stream with backoffs. The new default_backoff follows existing client-go conventions of being kind to the apiserver.

    Initially, this is enabled by default in Controller watches (configurable via Controller::trigger_backoff) and avoids error spam when CRDs are not installed.
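
    A minimal sketch of wrapping a plain watcher with StreamBackoff directly (assuming an existing Client, the backoff crate for the policy, and anyhow for brevity):

    use backoff::ExponentialBackoff;
    use futures::{StreamExt, TryStreamExt};
    use k8s_openapi::api::core::v1::Pod;
    use kube::{api::{Api, ListParams}, runtime::{utils::{try_flatten_applied, StreamBackoff}, watcher}, Client};

    async fn watch_with_backoff(client: Client) -> anyhow::Result<()> {
        let api: Api<Pod> = Api::namespaced(client, "default");
        // retry watch errors with an exponential backoff instead of hammering the apiserver
        let backed_off = StreamBackoff::new(watcher(api, ListParams::default()), ExponentialBackoff::default());
        let mut applied = try_flatten_applied(backed_off).boxed_local();
        while let Some(pod) = applied.try_next().await? {
            println!("applied: {:?}", pod.metadata.name);
        }
        Ok(())
    }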

    New version priority parser in kube-core

    To aid users in picking the most appropriate version of a kind from api discovery or through a CRD, two new sort orders have been exposed on the new kube_core::Version.

    What's Changed

    Added

    • Add DeleteParams constructors for easily setting PropagationPolicy by @kate-goldenring in https://github.com/kube-rs/kube-rs/pull/757
    • Add Serialize to ObjectList and add field-selector and jsonpath example by @ChinYing-Li in https://github.com/kube-rs/kube-rs/pull/760
    • Implement cordon/uncordon for Node by @ChinYing-Li in https://github.com/kube-rs/kube-rs/pull/762
    • Export Version priority parser with Ord impls in kube_core by @clux in https://github.com/kube-rs/kube-rs/pull/764
    • Add Api fns for arbitrary subresources and approval subresource for CertificateSigningRequest by @ChinYing-Li in https://github.com/kube-rs/kube-rs/pull/773
    • Support complex enums in CRDs by @teozkr in https://github.com/kube-rs/kube-rs/pull/779

    Changed

    • Add backoff handling for watcher and Controller by @clux in https://github.com/kube-rs/kube-rs/pull/703
    • Remove crate private identity_pem field from Config by @kazk in https://github.com/kube-rs/kube-rs/pull/771
    • Use SecretString in AuthInfo to avoid credential leaking by @ChinYing-Li in https://github.com/kube-rs/kube-rs/pull/766

    New Contributors

    • @kate-goldenring made their first contribution in https://github.com/kube-rs/kube-rs/pull/757
    • @ChinYing-Li made their first contribution in https://github.com/kube-rs/kube-rs/pull/760
    • @LyleScott made their first contribution in https://github.com/kube-rs/kube-rs/pull/775

    Full Changelog: https://github.com/kube-rs/kube-rs/compare/0.65.0...0.66.0

  • 0.65.0(Dec 10, 2021)

    • BREAKING: Removed kube::Error::OpenSslError - #716 by @kazk
    • BREAKING: Removed kube::Error::SslError - #704 and #716 by @kazk
    • BREAKING: Added kube::Error::NativeTls(kube::client::NativeTlsError) for errors from Native TLS - #716 by @kazk
    • BREAKING: Added kube::Error::RustlsTls(kube::client::RustlsTlsError) for errors from Rustls TLS - #704 by @kazk
    • Modified Kubeconfig parsing - allow empty kubeconfigs as per kubectl - #721 by @kazk
    • Added Kubeconfig::from_yaml - #718 via #719 by @imp
    • Updated rustls to 0.20.1 - #704 by @clux and @kazk
    • BREAKING: Added ObjectRef to the object that failed to be reconciled to kube::runtime::controller::Error::ReconcileFailed - #733 by @teozkr
    • BREAKING: Removed api_version and kind fields from kind structs generated by kube::CustomResource - #739 by @teozkr
    • Updated tokio-tungstenite to 0.16 - #750 by @dependabot
    • Updated tower-http to 0.2.0 - #748 by @kazk
    • BREAKING: kube-client: replace RefreshTokenLayer with AsyncFilterLayer in AuthLayer - #752 by @kazk
  • 0.64.0(Nov 16, 2021)

    • BREAKING: Replaced feature kube-derive/schema with attribute #[kube(schema)] - #690
      • If you currently disable default kube-derive default features to avoid automatic schema generation, add #[kube(schema = "disabled")] to your spec struct instead
    • BREAKING: Moved CustomResource derive crate overrides into subattribute #[kube(crates(...))] - #690
      • Replace #[kube(kube_core = .., k8s_openapi = .., schema = .., serde = .., serde_json = ..)] with #[kube(crates(kube_core = .., k8s_openapi = .., schema = .., serde = .., serde_json = ..))]
    • Added openssl-tls feature to use openssl for TLS on all platforms. Note that, even though native-tls uses a platform specific TLS, kube requires openssl on all platforms because native-tls only allows PKCS12 input to load certificates and private key at the moment, and creating PKCS12 requires openssl. - #700
    • BREAKING: Changed to fail loading configurations with PEM-encoded certificates containing invalid sections instead of ignoring them. Updated pem to 1.0.1. - #702
    • oauth: Updated tame-oauth to 0.6.0 which supports the same default credentials flow as the Go oauth2 for Google OAuth. In addition to reading the service account information from JSON file specified with GOOGLE_APPLICATION_CREDENTIALS environment variable, Application Default Credentials from gcloud, and obtaining OAuth tokens from local metadata server when running inside GCP are now supported. - #701

    Refining Errors

    We started working on improving error ergonomics. See the tracking issue #688 for more details.

    The following is the summary of changes to kube::Error included in this release:

    • Added Error::Auth(kube::client::AuthError) (errors related to client auth, some of them were previously in Error::Kubeconfig)
    • Added Error::BuildRequest(kube::core::request::Error) (errors building request from kube::core)
    • Added Error::InferConfig(kube::config::InferConfigError) (for Client::try_default)
    • Added Error::OpensslTls(kube::client::OpensslTlsError) (new openssl-tls feature) - #700
    • Added Error::UpgradeConnection(kube::client::UpgradeConnectionError) (ws feature, errors from upgrading a connection)
    • Removed Error::Connection (was unused)
    • Removed Error::RequestBuild (was unused)
    • Removed Error::RequestSend (was unused)
    • Removed Error::RequestParse (was unused)
    • Removed Error::InvalidUri (replaced by variants of errors in kube::config errors)
    • Removed Error::RequestValidation (replaced by a variant of Error::BuildRequest)
    • Removed Error::Kubeconfig (replaced by Error::InferConfig, and Error::Auth)
    • Removed Error::ProtocolSwitch (ws only, replaced by Error::UpgradeConnection)
    • Removed Error::MissingUpgradeWebSocketHeader (ws only, replaced by Error::UpgradeConnection)
    • Removed Error::MissingConnectionUpgradeHeader (ws only, replaced by Error::UpgradeConnection)
    • Removed Error::SecWebSocketAcceptKeyMismatch (ws only, replaced by Error::UpgradeConnection)
    • Removed Error::SecWebSocketProtocolMismatch (ws only, replaced by Error::UpgradeConnection)
    • Removed impl From<T> for Error

    The following breaking changes were made as a part of an effort to refine errors (the list is large, but most of them are lower level, and shouldn't require much change in most cases):

    • Removed impl From<E> for kube::Error - #686
    • Removed unused error variants in kube::Error: Connection, RequestBuild, RequestSend, RequestParse - #689
    • Removed unused error variant kube::error::ConfigError::LoadConfigFile - #689
    • Changed kube::Error::RequestValidation(String) to kube::Error::BuildRequest(kube::core::request::Error). Includes possible errors from building an HTTP request, and contains some errors from kube::core that were previously grouped under kube::Error::SerdeError and kube::Error::HttpError. kube::core::request::Error is described below. - #686
    • Removed kube::core::Error and kube::core::Result. kube::core::Error was replaced by more specific errors. - #686
      • Replaced kube::core::Error::InvalidGroupVersion with kube::core::gvk::ParseGroupVersionError
      • Changed the error returned from kube::core::admission::AdmissionRequest::with_patch to kube::core::admission::SerializePatchError (was kube::core::Error::SerdeError)
      • Changed the error associated with TryInto<AdmissionRequest<T>> to kube::core::admission::ConvertAdmissionReviewError (was kube::core::Error::RequestValidation)
      • Changed the error returned from methods of kube::core::Request to kube::core::request::Error (was kube::core::Error). kube::core::request::Error represents possible errors when building an HTTP request. The removed kube::core::Error had RequestValidation(String), SerdeError(serde_json::Error), and HttpError(http::Error) variants. They are now Validation(String), SerializeBody(serde_json::Error), and BuildRequest(http::Error) respectively in kube::core::request::Error.
    • Changed variants of error enums in kube::runtime to tuples. Replaced snafu with thiserror. - #686
    • Removed kube::error::ConfigError and kube::Error::Kubeconfig(ConfigError) - #696
      • Error variants related to client auth were moved to a new error kube::client::AuthError as described below
      • Remaining variants were split into kube::config::{InferConfigError, InClusterError, KubeconfigError} as described below
    • Added kube::client::AuthError by extracting error variants related to client auth from kube::ConfigError and adding more variants to preserve context - #696
    • Moved kube::error::OAuthError to kube::client::OAuthError - #696
    • Changed all errors in kube::client::auth to kube::client::AuthError - #696
    • Added kube::Error::Auth(kube::client::AuthError) - #696
    • Added kube::config::InferConfigError which is an error from Config::infer() and kube::Error::InferConfig(kube::config::InferConfigError) - #696
    • Added kube::config::InClusterError for errors related to loading in-cluster configuration by splitting kube::ConfigError and adding more variants to preserve context. - #696
    • Added kube::config::KubeconfigError for errors related to loading kubeconfig by splitting kube::ConfigError and adding more variants to preserve context. - #696
    • Changed methods of kube::Config to return these errors instead of kube::Error - #696
    • Removed kube::Error::InvalidUri which was replaced by error variants preserving context, such as KubeconfigError::ParseProxyUrl - #696
    • Moved all errors from upgrading to a WebSocket connection into kube::Error::UpgradeConnection(kube::client::UpgradeConnectionError) - #696
  • 0.63.2(Oct 28, 2021)

  • 0.63.1(Oct 26, 2021)

  • 0.63.0(Oct 26, 2021)

    • rust edition bumped to 2021 - #664, #666, #667
    • kube::CustomResource derive can now take arbitrary #[kube(k8s_openapi)] style-paths for k8s_openapi, schemars, serde, and serde_json - #675
    • kube: fix native-tls included when only rustls-tls feature is selected - #673 via #674
  • 0.62.0(Oct 22, 2021)

    • kube now re-exports kube-runtime under runtime feature - #651 via #652
      • no need to keep both kube and kube_runtime in Cargo.toml anymore
      • fixes issues with dependabot / lock-step upgrading
      • change kube_runtime::X import paths to kube::runtime::X when moving to the feature
    • kube::runtime added events module with an event Recorder - #249 via #653 + #662 + #663
    • kube::runtime::wait::conditions added is_crd_established helper - #659
    • kube::CustomResource derive can now take an arbitrary #[kube_core] path for kube::core - #658
    • kube::core consistently re-exported across crates
    • docs: major overhaul + architecture.md - #416 via #652
  • 0.61.0(Oct 9, 2021)

    • kube-core: BREAKING: extend CustomResourceExt trait with ::shortnames method (impl in kube-derive) - #641
    • kube-runtime: add wait module to await_condition, and added watch_object to watcher - #632 via #633
    • kube: add Restart marker trait to allow Api::restart on core workloads - #630 via #635
    • bump dependencies: tokio-tungstenite, k8s-openapi, schemars, tokio in particular - #643 + #645
  • 0.60.0(Sep 2, 2021)

    • kube: support k8s-openapi with v1_22 features - #621 via #622
    • kube: BREAKING: support for CustomResourceDefinition at v1beta1 now requires an opt-in deprecated-crd-v1beta1 feature - #622
    • kube-core: add content-type header to requests with body - #626 via #627
  • 0.59.0(Aug 9, 2021)

    • BREAKING: bumped k8s-openapi to 0.13.0 - #581 via #616
    • kube connects to kubernetes via cluster dns when using rustls - #587 via #597
      • client now works with rustls feature in-cluster - #153 via #597
    • kube nicer serialization of Kubeconfig - #613
    • kube-core added serde traits for ApiResource - #590
    • kube-core added CrdExtensions::crd_name method (implemented by kube-derive) - #583
    • kube-core added the HasSpec and HasStatus traits - #605
    • kube-derive added support to automatically implement the HasSpec and HasStatus traits - #605
    • kube-runtime fix tracing span hierarchy from applier - #600
  • 0.58.1(Jul 6, 2021)

  • 0.58.0(Jul 5, 2021)

    • kube: BREAKING: subresource marker traits renamed conjugation: Log, Execute, Attach, Evict (previously Logging, Executable, Attachable, Evictable) - #536 via #560
    • kube-derive added #[kube(category)] attr to set CRD categories - #559
    • kube-runtime added finalizer helper #291 via #475
    • kube-runtime added tracing for why reconciliations happened #457 via #571
    • kube-runtime added Controller::reconcile_all_on to allow scheduling all objects for reconciliation #551 via #555
    • kube-runtime added Controller::graceful_shutdown_on for shutting down the Controller while waiting for running reconciliations to finish - #552 via #573
    • BREAKING: controller::applier now starts a graceful shutdown when the queue terminates
    • BREAKING: scheduler now shuts down immediately when requests terminates, rather than waiting for the pending reconciliations to drain
    • kube-runtime added tracking for reconciliation reason
    • Added: Controller::owns_with and Controller::watches_with to pass a dyntype argument for dynamic Apis - #575
    • BREAKING: controller::trigger_* now returns a ReconcileRequest rather than ObjectRef. The ObjectRef can be accessed via the obj_ref field

    Known Issues

    • Api::replace can fail to unset list values with k8s-openapi 0.12 #581