Rudr
The new project will come to its first release in Q4 2020. Stay tuned!
The original README can be found here for historical purposes.
Hello,
As part of our prototyping work, we got Scylla working on Linux/arm64 with a private image. It would be nice if we could get an official arm64 image produced side by side with the amd64 one. I would be happy to help enable this, provided you can give me permission to do so.
Type: Enhancement
Hi, this is my ApplicationConfiguration:
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  annotations:
    description: Hello World App
    kubectl.kubernetes.io/last-applied-configuration: |
      {"apiVersion":"core.oam.dev/v1alpha1","kind":"ApplicationConfiguration","metadata":{"annotations":{"description":"Hello World App","version":"v1.0.0"},"name":"hello-app","namespace":"default"},"spec":{"components":[{"componentName":"helloworld","instanceName":"myhelloapp","traits":[{"name":"ingress","parameterValues":[{"name":"hostname","value":"helloworld.com"},{"name":"path","value":"/"},{"name":"service_port","value":9090}]}]}],"variables":null}}
    version: v1.0.0
  creationTimestamp: "2019-10-31T11:43:07Z"
  generation: 1
  name: hello-app
  namespace: default
  resourceVersion: "46968"
  selfLink: /apis/core.oam.dev/v1alpha1/namespaces/default/applicationconfigurations/hello-app
  uid: 9b583805-fbd3-11e9-a7ed-0800274110f1
spec:
  components:
  - componentName: helloworld
    instanceName: myhelloapp
    traits:
    - name: ingress
      parameterValues:
      - name: hostname
        value: helloworld.com
      - name: path
        value: /
      - name: service_port
        value: 9090
  variables: null
This refers to the already deployed component helloworld. However, deploying this ApplicationConfiguration does not do anything. I am using Rudr version v1.0.0-alpha.1.
One thing I noted is that the status field is missing when I do kubectl get configurations hello-app -o yaml.
Is there anything wrong with what I have done? Or is this a known issue?
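For reference, the configuration above presupposes a ComponentSchematic named helloworld in the same namespace. A minimal sketch of what such a component could look like (the image and port values here are hypothetical, modeled loosely on the Rudr examples):

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ComponentSchematic
metadata:
  name: helloworld
spec:
  workloadType: core.oam.dev/v1alpha1.Server
  containers:
  - name: server
    image: example/helloworld:1.0.0   # hypothetical image
    ports:
    - name: http
      containerPort: 9090
      protocol: TCP
```

If the referenced component does not exist (or its name does not match exactly), the configuration has nothing to instantiate, which would be worth ruling out first.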
Type: Bug Status: Completed
I was able to successfully install Scylla in AKS, but after scaling the cluster, none of the component/configuration installations succeeded. Because of that, I deleted the deployment using Helm and tried to install it again, but I am stuck with the error below. I had to remove the CRDs manually, as they were not deleted as part of helm delete scylla. Thanks for your help.
$ helm install --name scylla ./charts/scylla --wait
Error: validation failed: unable to recognize "": no matches for kind "Trait" in version "core.hydra.io/v1alpha1"
$ kubectl get trait
error: the server doesn't have a resource type "trait"
Built a fresh Scylla image and deployed it. When I apply component and configuration YAML, no additional resources are produced, and the log does not indicate any failures:
# kubectl logs -f pod/scylla-7874d8844b-kzftb
[2019-10-01T01:51:45Z INFO scylla] starting server
[2019-10-01T01:51:45Z INFO scylla] Watcher is running
[2019-10-01T01:51:45Z INFO scylla] Health server is running on 0.0.0.0:8080
Also, I noticed the build ran longer compared to last time (this is especially evident for the arm64 build). Are there changes in progress, and is the current broken state expected?
Removes unused imports, as well as any code that isn't used anywhere in Scylla. Removes all compiler warnings when running cargo build or cargo test.
Signed-off-by: Matthew Fisher [email protected]
I upgraded to Helm 3 (I think I did it right?), but now I'm getting an error when I try to install the charts.
Error: failed pre-install: unable to decode "": no kind "CustomResourceDefinition" is registered for version "apiextensions.k8s.io/v1beta1" in scheme "k8s.io/client-go/kubernetes/scheme/register.go:65"
Rias-MacBook-Pro:scylla riabhatia$ kubectl version
Client Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.0", GitCommit:"e8462b5b5dc2584fdcd18e6bcfe9f1e4d970a529", GitTreeState:"clean", BuildDate:"2019-06-20T04:49:16Z", GoVersion:"go1.12.6", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.8", GitCommit:"a89f8c11a5f4f132503edbc4918c98518fd504e3", GitTreeState:"clean", BuildDate:"2019-04-23T04:41:47Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}
Rias-MacBook-Pro:scylla riabhatia$ helm version
version.BuildInfo{Version:"v3.0.0-alpha.2", GitCommit:"97e7461e41455e58d89b4d7d192fed5352001d44", GitTreeState:"clean", GoVersion:"go1.12.7"}
For the October milestone, it would be great to have a user-friendly README that has the following:
/cc @flynnduism
Type: Docs Status: Completed
I'm trying the first app and have also tried other examples, but none of them creates an external IP in my AKS cluster. Is there a default ingress controller for Hydra that I can use, like the one Azure Dev Spaces provides?
Any comment on how to access an app published via Scylla from outside the cluster would be helpful.
$ kubectl get configuration,pod,svc,ingress
NAME        AGE
first-app   28s

NAME                        READY   STATUS    RESTARTS   AGE
first-app-nginx-singleton   1/1     Running   0          19s

NAME                        TYPE        CLUSTER-IP    EXTERNAL-IP   PORT(S)   AGE
first-app-nginx-singleton   ClusterIP   10.0.78.193   <none>        80/TCP    19s
kubernetes                  ClusterIP   10.0.0.1      <none>        443/TCP   95d

NAME                                      HOSTS         ADDRESS   PORTS   AGE
first-app-nginx-singleton-trait-ingress   example.com             80      19s
For post-processing of operational configurations, there are a few places where the syntax [fromVariable(NAME)] should result in the substitution of a variable name with its value. This is described here: https://github.com/microsoft/hydra-spec/blob/master/6.operational_configuration.md#properties
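The substitution itself is plain string templating: scan for the placeholder, look the name up among the configuration's variables, and splice in the value. A minimal sketch of such a pass (illustrative only; the function and variable names here are mine, not Rudr's actual implementation):

```rust
use std::collections::HashMap;

/// Replace every `[fromVariable(NAME)]` occurrence in `input` with the
/// matching value from `vars`. Unknown variable names are left untouched.
fn substitute(input: &str, vars: &HashMap<String, String>) -> String {
    const OPEN: &str = "[fromVariable(";
    let mut out = String::new();
    let mut rest = input;
    while let Some(start) = rest.find(OPEN) {
        out.push_str(&rest[..start]);
        let after = &rest[start + OPEN.len()..];
        match after.find(")]") {
            Some(end) => {
                let name = &after[..end];
                match vars.get(name) {
                    // Known variable: splice in its value.
                    Some(value) => out.push_str(value),
                    // Unknown variable: keep the literal placeholder.
                    None => out.push_str(&rest[start..start + OPEN.len() + end + 2]),
                }
                rest = &after[end + 2..];
            }
            None => {
                // Unterminated placeholder: emit the remainder as-is.
                out.push_str(&rest[start..]);
                rest = "";
            }
        }
    }
    out.push_str(rest);
    out
}

fn main() {
    let vars: HashMap<String, String> =
        [("DEMO".to_string(), "HelloWorld".to_string())].into_iter().collect();
    println!("{}", substitute("[fromVariable(DEMO)]", &vars)); // HelloWorld
}
```

Leaving unknown placeholders untouched (rather than erasing them) makes missing-variable bugs like the one reported below visible in the pod's output.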
The first cut of the Autoscaler trait only supports CPU metrics. Of course, there are tons of others supported by Kubernetes. We need to support those as well.
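As a sketch of what that might look like on the user-facing side, additional metrics could hypothetically be exposed as extra trait parameters alongside the existing CPU one (the memory parameter below is a proposal, not a currently supported option):

```yaml
# Hypothetical: surfacing more metrics as autoscaler trait parameters
traits:
- name: autoscaler
  parameterValues:
  - name: minimum
    value: 2
  - name: maximum
    value: 10
  - name: cpu       # supported today: target average CPU utilization (%)
    value: 50
  - name: memory    # proposed: target average memory utilization (%)
    value: 70
```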
Type: Enhancement
Here's an initial concepts doc for scopes (Issue #211). Some open questions I have from @wonderflow's Health Scope Controller walkthrough:
I'm getting this error when I try helm install healthscope ./charts/healthscope:
Error: apiVersion "apps/v1beta2" in healthscope/templates/deployment.yaml is not available
If I change the apiVersion to, e.g., apps/v1, it (I think?) installs fine. I also tried deleting rudr/crds and reinstalling the latest (without specifying a version tag), but got the same results.
After applying health-scope-config.yaml, the output of kubectl get health is "No resources found." Yet the output of kubectl get scopes lists both health and network, and my-health-scope is listed by kubectl get configurations.
Is the network scope actually implemented in Rudr yet? (Should that be documented with this PR?)
What are the best practices we should include for:
Thanks!
Describe the bug Ingress trait and ComponentSchematic documentation are not clear on some things about using Ingress, such as:
Specifically, I'm trying to recreate the Istio BookInfo demo as a Rudr app definition. I can easily create a Kubernetes Ingress to do what's desired, as below:
apiVersion: networking.k8s.io/v1beta1
kind: Ingress
metadata:
  name: bookinfo
  annotations:
    nginx.ingress.kubernetes.io/ssl-redirect: "false"
spec:
  rules:
  - http:
      paths:
      - path: /login
        pathType: Exact
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /logout
        pathType: Exact
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /productpage
        pathType: Exact
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /static
        pathType: Prefix
        backend:
          serviceName: productpage
          servicePort: 9080
      - path: /api/v1/products
        pathType: Prefix
        backend:
          serviceName: productpage
          servicePort: 9080
but the BookInfo app depends on various Services existing as well, some of which (reviews) have selectors that select pods from multiple Deployments. It's not clear whether I can just define them as separate ComponentSchematics and then reference them all from a single ApplicationConfig somehow, or if I need to do something different, or if this is not actually (currently) possible to define in OAM and/or Rudr.
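For what it's worth, the OAM v1alpha1 model does allow listing several components in one ApplicationConfiguration, which covers at least the multi-component part of the question; whether it covers Services selecting pods across multiple Deployments is the open issue. A sketch of the multi-component shape (component names borrowed from BookInfo; the trait parameters are hypothetical):

```yaml
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: bookinfo
spec:
  components:
  - componentName: productpage
    instanceName: productpage-v1
    traits:
    - name: ingress
      parameterValues:
      - name: hostname
        value: bookinfo.example.com   # hypothetical hostname
      - name: path
        value: /productpage
      - name: service_port
        value: 9080
  - componentName: reviews
    instanceName: reviews-v1
  - componentName: reviews
    instanceName: reviews-v2
```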
Type: Bug
Output of helm version:
version.BuildInfo{Version:"v3.1.2", GitCommit:"d878d4d45863e42fd5cff6743294a11d28a9abce", GitTreeState:"clean", GoVersion:"go1.13.8"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.2-41+b5cdb79a4060a3", GitCommit:"b5cdb79a4060a307d0c8a56a128aadc0da31c5a2", GitTreeState:"clean", BuildDate:"2020-04-27T17:29:53Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"18+", GitVersion:"v1.18.2-41+b5cdb79a4060a3", GitCommit:"b5cdb79a4060a307d0c8a56a128aadc0da31c5a2", GitTreeState:"clean", BuildDate:"2020-04-27T17:31:24Z", GoVersion:"go1.14.2", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): microk8s
Describe the bug Rudr hello-world does not deploy on Kubernetes 1.18
OAM yaml files used: the Rudr sample app from commit 8e9b9df7cc8a087582ca767aef4cd4d99fd99c87 (helloworld-python-component.yaml and first-app-config.yaml)
What happened:
No application resources started up. Logs from the Rudr pod showed:
2020-04-30 06:46:16 INFO [rudr::instigator:src/instigator.rs:794] UID: ed939316-7ada-4ee6-9a3b-81183c159ecd
2020-04-30 06:46:16 INFO [rudr::instigator:src/instigator.rs:602] MainControlLoop: Looking up workload for first-app <helloworld-python-v1>
2020-04-30 06:46:16 INFO [rudr::instigator:src/instigator.rs:412] MainControlLoop: Adding component helloworld-python-v1
2020-04-30 06:46:16 INFO [rudr::schematic::component:src/schematic/component.rs:188] Looking for image pull secret
2020-04-30 06:46:16 ERROR [rudr::trait_manager:src/trait_manager.rs:118] Trait phase Add failed for first-app: Error { inner:
ApiError ("admission webhook \"validate.nginx.ingress.kubernetes.io\" denied the request: \n-------------------------------------------------------------------------------\nError: exit status 1\n2020/04/30 06:46:16 [emerg] 1253#1253: duplicate location \"/\" in /tmp/nginx-cfg589181491:1123\nnginx: [emerg] duplicate location \"/\" in /tmp/nginx-cfg589181491:1123\nnginx: configuration file /tmp/nginx-cfg589181491 test failed\n\n-------------------------------------------------------------------------------\n") }
2020-04-30 06:46:16 INFO [rudr:src/main.rs:118] Handled event
2020-04-30 06:46:17 INFO [rudr:src/main.rs:30] Loading in-cluster config
2020-04-30 06:46:17 INFO [rudr::instigator:src/instigator.rs:602] StatusCheckLoop: Looking up workload for first-app <helloworld-python-v1>
2020-04-30 06:46:17 WARN [rudr::schematic::traits::ingress:src/schematic/traits/ingress.rs:138] Ingress not found ApiError NotFound ("ingresses.extensions \"first-app-helloworld-python-v1-trait-ingress\" not found"). Recreating ...
followed by repeated looping of the last three messages.
Maybe this is because of the deprecation of the extensions/v1beta1 Ingress API? I know the docs say only 1.15 and 1.16 are supported, and this is 1.18. I have a 1.15 EKS cluster I can try this on tomorrow.
What you expected to happen:
Normal demo app startup.
How to reproduce it (as minimally and precisely as possible):
OAM Spec Info
In the OAM/Rudr spec, the SingletonTask and Task workload types have nowhere to specify the restart-on-failure policy or the backoff timer.
Is your feature request related to a problem? Please describe. Task and SingletonTask map to Kubernetes Jobs, but the detailed Job parameters are not supported in OAM task components.
Describe the solution you'd like The ability to specify the restart policy and backoff time parameters on components.
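For comparison, the underlying Kubernetes Job API already exposes these knobs directly, so the request is essentially to surface them through the component schema. A minimal Job illustrating the fields in question (the name, image, and command are placeholders):

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: example-task            # hypothetical name
spec:
  backoffLimit: 4               # retries before the Job is marked failed
  activeDeadlineSeconds: 600    # optional overall timeout for the Job
  template:
    spec:
      restartPolicy: OnFailure  # or Never; Always is not valid for Jobs
      containers:
      - name: task
        image: busybox:1.31
        command: ["sh", "-c", "echo done"]
```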
Output of helm version:
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"16", GitVersion:"v1.16.0", GitCommit:"2bd9643cee5b3b3a5ecbd3af49d09018f0773c77", GitTreeState:"clean", BuildDate:"2019-09-18T14:36:53Z", GoVersion:"go1.12.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15", GitVersion:"v1.15.7", GitCommit:"c83d931fb9bece427bc63a02349755e0f8696d3e", GitTreeState:"clean", BuildDate:"2020-01-31T20:09:49Z", GoVersion:"go1.12.12", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): AKS
Describe the bug The fromVariable() syntax in the application config does not work.
OAM yaml files used:
apiVersion: core.oam.dev/v1alpha1
kind: ApplicationConfiguration
metadata:
  name: example-var-task
spec:
  variables:
  - name: DEMO
    value: HelloWorld
  components:
  - componentName: helloworld-python-v1
    instanceName: one-alpine-var-task
    parameterValues:
    - name: demo
      value: "[fromVariable(DEMO)]"
What happened:
$ kubectl logs one-alpine-var-task-g4mjf
[fromVariable(DEMO)]
The variable is not passed into the pod.
What you expected to happen: the variable should be passed into the pod.
Relevant screenshots:
How to reproduce it (as minimally and precisely as possible):
Anything else we need to know:
Type: Bug
Output of helm version: version.BuildInfo{Version:"v3.0.3", GitCommit:"ac925eb7279f4a6955df663a0128044a8a6b7593", GitTreeState:"clean", GoVersion:"go1.13.6"}
Output of kubectl version:
Client Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:20:10Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"17", GitVersion:"v1.17.0", GitCommit:"70132b0f130acc0bed193d9ba59dd186f0e634cf", GitTreeState:"clean", BuildDate:"2019-12-07T21:12:17Z", GoVersion:"go1.13.4", Compiler:"gc", Platform:"linux/amd64"}
Cloud Provider/Platform (AKS, GKE, Minikube etc.): Minikube
Describe the bug When using oam-go-sdk to create a ComponentSchematic, if the ComponentSchematic's WorkloadSettings is not initialized, an error is reported.
What you expected to happen: No errors occur
logs: "port","type":"string"}],"workloadSettings":null,"workloadType":"core.oam.dev/v1alpha1.Server"},"status":{}} , Error("invalid type: null, expected a sequence", line: 1, column: 840)
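The error comes from deserializing an explicit null where a sequence is expected. A common pattern on the consuming side is to model a nullable list as an Option and normalize it to an empty list; a minimal sketch of that pattern with hypothetical field names (plain Rust; the serde wiring is omitted for brevity):

```rust
/// A trimmed-down stand-in for a component spec whose settings list
/// may be absent or explicitly null in the source document.
#[derive(Debug, Clone)]
struct ComponentSpec {
    workload_type: String,
    // Option distinguishes "null / missing" from "empty list".
    workload_settings: Option<Vec<String>>,
}

impl ComponentSpec {
    /// Normalize: treat a missing or null settings list as empty.
    fn settings(&self) -> Vec<String> {
        self.workload_settings.clone().unwrap_or_default()
    }
}

fn main() {
    let spec = ComponentSpec {
        workload_type: "core.oam.dev/v1alpha1.Server".to_string(),
        workload_settings: None, // the "null" case from the report
    };
    println!("{} settings", spec.settings().len()); // prints "0 settings"
}
```

The same idea applies on either side of the boundary: the SDK could emit an empty list instead of null, or the deserializer could accept null as empty.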
Type: Bug
This marks the first release of Rudr, an implementation of the Open Application Model specification. Rudr is implemented atop Kubernetes (>= 1.14), and provides implementations of Components, Application Configurations, and Traits.
Want to join in? Follow the Getting Started Guide to get going. Try out some of the examples. As you find bugs, don't hesitate to open some new issues. And we :heart: PRs! There's plenty of work to do in this new Rust codebase.
Make sure you read the specification, too! We're doing our best to faithfully implement the spec, but we still have a ways to go.