Bottlerocket - An operating system designed for hosting containers

Welcome to Bottlerocket!

Bottlerocket is a free and open-source Linux-based operating system meant for hosting containers.

If you’re ready to jump right in, read one of our setup guides for running Bottlerocket in Amazon EKS, Amazon ECS, or VMware.

Bottlerocket focuses on security and maintainability, providing a reliable, consistent, and safe platform for container-based workloads. This is a reflection of what we've learned building operating systems and services at Amazon. You can read more about what drives us in our charter.

The base operating system has just what you need to run containers reliably, and is built with standard open-source components. Bottlerocket-specific additions focus on reliable updates and on the API. Instead of making configuration changes manually, you can change settings with an API call, and these changes are automatically migrated through updates.

Contact us

If you find a security issue, please contact our security team rather than opening an issue.

If you're interested in contributing, thank you! Please see our contributor's guide.

We use GitHub issues to track other bug reports and feature requests. You can look at existing issues to see whether your concern is already known.

If not, you can select from a few templates and get some guidance on the type of information that would be most helpful. Contact us with a new issue here.

If you just have questions about Bottlerocket, please feel free to start or join a discussion.

We don't have other communication channels set up quite yet, but don't worry about making an issue or a discussion thread! You can let us know about things that seem difficult, or even ways you might like to help.

Variants

To start, we're focusing on the use of Bottlerocket as a host OS in AWS EKS Kubernetes clusters and Amazon ECS clusters. We’re excited to get early feedback and to continue working on more use cases!

Bottlerocket is architected such that different cloud environments and container orchestrators can be supported in the future. A build of Bottlerocket that supports different features or integration characteristics is known as a 'variant'. The artifacts of a build will include the architecture and variant name. For example, an x86_64 build of the aws-k8s-1.21 variant will produce an image named bottlerocket-aws-k8s-1.21-x86_64-<version>-<commit>.img.

The following variants support EKS, as described above:

  • aws-k8s-1.18
  • aws-k8s-1.19
  • aws-k8s-1.20
  • aws-k8s-1.21

The following variant supports ECS:

  • aws-ecs-1

We also have variants in preview status that are designed to be Kubernetes worker nodes in VMware:

  • vmware-k8s-1.20
  • vmware-k8s-1.21

The aws-k8s-1.15, aws-k8s-1.16, and aws-k8s-1.17 variants are no longer supported. We recommend users replace nodes running these variants with the latest variant compatible with their cluster.

Architectures

Our supported architectures include x86_64 and aarch64 (written as arm64 in some contexts).

Setup

Bottlerocket is best used with a container orchestrator. To get started with Kubernetes in Amazon EKS, please see QUICKSTART-EKS. To get started with Kubernetes in VMware, please see QUICKSTART-VMWARE. To get started with Amazon ECS, please see QUICKSTART-ECS. These guides describe:

  • how to set up a cluster with the orchestrator, so your Bottlerocket instance can run containers
  • how to launch a Bottlerocket instance in EC2 or VMware

To build your own Bottlerocket images, please see BUILDING. It describes:

  • how to build an image
  • how to register an EC2 AMI from an image

To publish your built Bottlerocket images, please see PUBLISHING. It describes:

  • how to make TUF repos including your image
  • how to copy your AMI across regions
  • how to mark your AMIs public or grant access to specific accounts
  • how to make your AMIs discoverable using SSM parameters

Exploration

To improve security, there's no SSH server in a Bottlerocket image, and not even a shell.

Don't panic!

There are a couple of out-of-band access methods you can use to explore Bottlerocket like you would a typical Linux system. Either option will give you a shell within Bottlerocket. From there, you can change settings, manually update Bottlerocket, debug problems, and generally explore.

Note: These methods require that your instance has permission to access the ECR repository where these containers live; the appropriate policy to add to your instance's IAM role is AmazonEC2ContainerRegistryReadOnly.

Control container

Bottlerocket has a "control" container, enabled by default, that runs outside of the orchestrator in a separate instance of containerd. This container runs the AWS SSM agent that lets you run commands, or start shell sessions, on Bottlerocket instances in EC2. (You can easily replace this control container with your own just by changing the URI; see Settings.)

In AWS, you need to give your instance the SSM role for this to work; see the setup guide. Outside of AWS, you can use AWS Systems Manager for hybrid environments. There's more detail about hybrid environments in the control container documentation.

Once the instance is started, you can start a session:

  • Go to AWS SSM's Session Manager
  • Select "Start session" and choose your Bottlerocket instance
  • Select "Start session" again to get a shell

If you prefer a command-line tool, you can start a session with a recent AWS CLI and the session-manager-plugin. Then you'd be able to start a session using only your instance ID, like this:

aws ssm start-session --target INSTANCE_ID

With the default control container, you can make API calls to configure and manage your Bottlerocket host. To do even more, read the next section about the admin container. If you've enabled the admin container, you can access it from the control container like this:

apiclient exec admin bash

Admin container

Bottlerocket has an administrative container, disabled by default, that runs outside of the orchestrator in a separate instance of containerd. This container has an SSH server that lets you log in as ec2-user using your EC2-registered SSH key. Outside of AWS, you can pass in your own SSH keys. (You can easily replace this admin container with your own just by changing the URI; see Settings.)

To enable the container, you can change the setting in user data when starting Bottlerocket, for example EC2 instance user data:

[settings.host-containers.admin]
enabled = true

If Bottlerocket is already running, you can enable the admin container from the default control container like this:

enable-admin-container

If you're using a custom control container, or want to make the API calls directly, you can enable the admin container like this instead:

apiclient set host-containers.admin.enabled=true

Once you've enabled the admin container, you can either access it through SSH or from the control container like this:

apiclient exec admin bash

Once you're in the admin container, you can run sheltie to get a full root shell in the Bottlerocket host. Be careful; while you can inspect and change even more as root, Bottlerocket's filesystem and dm-verity setup will prevent most changes from persisting over a restart - see Security.

Updates

Rather than a package manager that updates individual pieces of software, Bottlerocket downloads a full filesystem image and reboots into it. It can automatically roll back if boot failures occur, and workload failures can trigger manual rollbacks.

The update process uses images secured by TUF. For more details, see the update system documentation.

Update methods

There are several ways of updating your Bottlerocket hosts. We provide tools for automatically updating hosts, as well as an API for direct control of updates.

Automated updates

For EKS variants of Bottlerocket, we recommend using the Bottlerocket update operator for automated updates.

For the ECS variant of Bottlerocket, we recommend using the Bottlerocket ECS updater for automated updates.

Update API

The Bottlerocket API includes methods for checking and starting system updates. You can read more about the update APIs in our update system documentation.

apiclient knows how to handle those update APIs for you, and you can run it from the control or admin containers.

To see what updates are available:

apiclient update check

If an update is available, it will show up in the chosen_update field. The available_updates field will show the full list of available versions, including older versions, because Bottlerocket supports safely rolling back.

To apply the latest update:

apiclient update apply

The next time you reboot, you'll start up in the new version, and system configuration will be automatically migrated. To reboot right away:

apiclient reboot

If you're confident about updating, the apiclient update apply command has --check and --reboot flags to combine the above actions, so you can accomplish all of the above steps like this:

apiclient update apply --check --reboot

See the apiclient documentation for more details.

Update rollback

The system will automatically roll back if it's unable to boot. If the update is not functional for a given container workload, you can do a manual rollback:

signpost rollback-to-inactive
reboot

This doesn't require any external communication, so it's quicker than apiclient, and it's made to be as reliable as possible.

Settings

Here we'll describe the settings you can configure on your Bottlerocket instance, and how to do it.

(API endpoints are defined in our OpenAPI spec if you want more detail.)

Interacting with settings

Using the API client

You can see the current settings with an API request:

apiclient get settings

This will return all of the current settings in JSON format. For example, here's an abbreviated response:

{"motd": "...", "kubernetes": {...}}

You can change settings like this:

apiclient set motd="hi there" kubernetes.node-labels.environment=test

You can also use a JSON input mode to help change many related settings at once, and a "raw" mode if you want more control over how the settings are committed and applied to the system. See the apiclient README for details.

Using user data

If you know what settings you want to change when you start your Bottlerocket instance, you can send them in the user data.

In user data, we structure the settings in TOML form to make things a bit simpler. Here's the user data to change the message of the day setting, as we did in the last section:

[settings]
motd = "my own value!"

If your user data is over the size limit of the platform (e.g. 16KiB for EC2) you can compress the contents with gzip. (With aws-cli, you can use --user-data fileb:///path/to/gz-file to pass binary data.)
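For example (file names here are illustrative), you could write your settings as TOML, compress them, and pass the binary file to aws-cli:

```shell
# Write settings as TOML user data (example content).
cat > user-data.toml <<'EOF'
[settings]
motd = "my own value!"
EOF

# Compress it; EC2 accepts gzip-compressed user data.
gzip --keep --force user-data.toml

# Then pass the binary file to aws-cli, for example:
#   aws ec2 run-instances ... --user-data fileb://user-data.toml.gz
```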

Description of settings

Here we'll describe each setting you can change.

Note: You can see the default values (for any settings that are not generated at runtime) by looking in the defaults.d directory for a variant, for example aws-ecs-1.

When you're sending settings to the API, or receiving settings from the API, they're in a structured JSON format. This allows modification of any number of keys at once. It also lets us ensure that they fit the definition of the Bottlerocket data model - requests with invalid settings won't even parse correctly, helping ensure safety.

Here, however, we'll use the shortcut "dotted key" syntax for referring to keys. This is used in some API endpoints with less-structured requests or responses. It's also more compact for our needs here.

In this format, "settings.kubernetes.cluster-name" refers to the same key as in the JSON {"settings": {"kubernetes": {"cluster-name": "value"}}}.

Top-level settings

  • settings.motd: This setting is just written out to /etc/motd. It's useful as a way to get familiar with the API! Try changing it.

Kubernetes settings

See the EKS setup guide for much more detail on setting up Bottlerocket and Kubernetes in AWS EKS. For more details about running Bottlerocket as a Kubernetes worker node in VMware, see the VMware setup guide.

The following settings must be specified in order to join a Kubernetes cluster. You should specify them in user data.

  • settings.kubernetes.cluster-certificate: This is the base64-encoded certificate authority of the cluster.
  • settings.kubernetes.api-server: This is the cluster's Kubernetes API endpoint.

For Kubernetes variants in AWS, you must also specify:

  • settings.kubernetes.cluster-name: The cluster name you chose during setup; the setup guide uses "bottlerocket".

For Kubernetes variants in VMware, you must specify:

  • settings.kubernetes.cluster-dns-ip: The IP of the DNS service running in the cluster.
  • settings.kubernetes.bootstrap-token: The token used for TLS bootstrapping.

The following settings can be optionally set to customize the node labels and taints. Remember to quote keys (since they often contain ".") and to quote all values.

  • settings.kubernetes.node-labels: Labels in the form of key, value pairs added when registering the node in the cluster.
  • settings.kubernetes.node-taints: Taints in the form of key, value and effect entries added when registering the node in the cluster.
    • Example user data for setting up labels and taints:
      [settings.kubernetes.node-labels]
      "label1" = "foo"
      "label2" = "bar"
      [settings.kubernetes.node-taints]
      "dedicated" = "experimental:PreferNoSchedule"
      "special" = "true:NoSchedule"
      

The following settings are optional and allow you to further configure your cluster.

  • settings.kubernetes.cluster-domain: The DNS domain for this cluster, allowing all Kubernetes-run containers to search this domain before the host's search domains. Defaults to cluster.local.
  • settings.kubernetes.standalone-mode: Whether to run the kubelet in standalone mode, without connecting to an API server. Defaults to false.
  • settings.kubernetes.cloud-provider: The cloud provider for this cluster. Defaults to aws for AWS variants, and external for other variants.
  • settings.kubernetes.authentication-mode: Which authentication method the kubelet should use to connect to the API server, and for incoming requests. Defaults to aws for AWS variants, and tls for other variants.
  • settings.kubernetes.server-tls-bootstrap: Enables or disables server certificate bootstrap. When enabled, the kubelet will request a certificate from the certificates.k8s.io API. This requires an approver to approve the certificate signing requests (CSR). Defaults to true.
  • settings.kubernetes.bootstrap-token: The token to use for TLS bootstrapping. This is only used with the tls authentication mode, and is otherwise ignored.
  • settings.kubernetes.eviction-hard: The signals and thresholds that trigger pod eviction. Remember to quote signals (since they all contain ".") and to quote all values.
    • Example user data for setting up eviction hard:
      [settings.kubernetes.eviction-hard]
      "memory.available" = "15%"
      
  • settings.kubernetes.allowed-unsafe-sysctls: Enables specified list of unsafe sysctls.
    • Example user data for setting up allowed unsafe sysctls:
      allowed-unsafe-sysctls = ["net.core.somaxconn", "net.ipv4.ip_local_port_range"]
      
  • settings.kubernetes.system-reserved: Resources reserved for system components.
    • Example user data for setting up system reserved:
      [settings.kubernetes.system-reserved]
      cpu = "10m"
      memory = "100Mi"
      ephemeral-storage = "1Gi"
      
  • settings.kubernetes.registry-qps: The registry pull QPS.
  • settings.kubernetes.registry-burst: The maximum size of bursty pulls.
  • settings.kubernetes.event-qps: The maximum event creations per second.
  • settings.kubernetes.event-burst: The maximum size of a burst of event creations.
  • settings.kubernetes.kube-api-qps: The QPS to use while talking with the Kubernetes API server.
  • settings.kubernetes.kube-api-burst: The burst to allow while talking with the Kubernetes API server.
  • settings.kubernetes.container-log-max-size: The maximum size of a container log file before it is rotated.
  • settings.kubernetes.container-log-max-files: The maximum number of container log files that can be present for a container.
  • settings.kubernetes.cpu-manager-policy: Specifies the CPU manager policy. Possible values are static and none. Defaults to none. If you want to allow pods with certain resource characteristics to be granted increased CPU affinity and exclusivity on the node, you can set this setting to static. You should reboot if you change this setting after startup - try apiclient reboot.
  • settings.kubernetes.cpu-manager-reconcile-period: Specifies the CPU manager reconcile period, which controls how often updated CPU assignments are written to cgroupfs. The value is a duration like 30s for 30 seconds or 1h5m for 1 hour and 5 minutes.
  • settings.kubernetes.topology-manager-policy: Specifies the topology manager policy. Possible values are none, restricted, best-effort, and single-numa-node. Defaults to none.
  • settings.kubernetes.topology-manager-scope: Specifies the topology manager scope. Possible values are container and pod. Defaults to container. If you want to group all containers in a pod to a common set of NUMA nodes, you can set this setting to pod.
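As a sketch, several of these kubelet tunables could be combined in user data like this (the values are illustrative, not recommendations):

```toml
[settings.kubernetes]
registry-qps = 5
registry-burst = 10
container-log-max-size = "10Mi"
container-log-max-files = 5
cpu-manager-policy = "static"
```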

You can also optionally specify static pods for your node with the following settings. Static pods can be particularly useful when running in standalone mode.

  • settings.kubernetes.static-pods.<custom identifier>.manifest: A base64-encoded pod manifest.
  • settings.kubernetes.static-pods.<custom identifier>.enabled: Whether the static pod is enabled.
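For example, you could generate the base64-encoded manifest for a static pod and drop it into user data like this (the pod, image, and identifier names are all illustrative):

```shell
# Base64-encode a pod manifest for use in settings.kubernetes.static-pods.
manifest_b64=$(base64 -w0 <<'EOF'
apiVersion: v1
kind: Pod
metadata:
  name: hello
spec:
  containers:
    - name: hello
      image: public.ecr.aws/docker/library/busybox
      command: ["sleep", "infinity"]
EOF
)

# Corresponding TOML user data:
cat <<EOF
[settings.kubernetes.static-pods.hello]
manifest = "$manifest_b64"
enabled = true
EOF
```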

For Kubernetes variants in AWS and VMware, the following are set for you automatically, but you can override them if you know what you're doing! In AWS, pluto sets these based on runtime instance information. In VMware, Bottlerocket uses netdog (for node-ip) or relies on default values.

  • settings.kubernetes.node-ip: The IP address of this node.
  • settings.kubernetes.pod-infra-container-image: The URI of the "pause" container.
  • settings.kubernetes.kube-reserved: Resources reserved for node components.
    • Bottlerocket provides default values for these resources via schnauzer:
      • cpu: in millicores, derived from the total number of vCPUs available on the instance.
      • memory: in mebibytes, derived from the maximum number of pods on the instance: memory_to_reserve = max_num_pods * 11 + 255.
      • ephemeral-storage: defaults to 1Gi.
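As a quick sketch of that memory formula, for a hypothetical instance type that supports 29 pods:

```shell
# memory_to_reserve = max_num_pods * 11 + 255 (in mebibytes)
max_num_pods=29
memory_to_reserve=$(( max_num_pods * 11 + 255 ))
echo "${memory_to_reserve}Mi"   # 574Mi
```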

For Kubernetes variants in AWS, the following settings are set for you automatically by pluto.

  • settings.kubernetes.max-pods: The maximum number of pods that can be scheduled on this node (limited by the number of available IPv4 addresses).
  • settings.kubernetes.cluster-dns-ip: Derived from the EKS Service IP CIDR or the CIDR block of the primary network interface.

Amazon ECS settings

See the setup guide for much more detail on setting up Bottlerocket and ECS.

The following settings are optional and allow you to configure how your instance joins an ECS cluster. Since joining a cluster happens at startup, they need to be specified in user data.

  • settings.ecs.cluster: The name or ARN of your Amazon ECS cluster. If left unspecified, Bottlerocket will join your default cluster.
  • settings.ecs.instance-attributes: Attributes in the form of key, value pairs added when registering the container instance in the cluster.
    • Example user data for setting up attributes:
      [settings.ecs.instance-attributes]
      attribute1 = "foo"
      attribute2 = "bar"
      

The following settings are optional and allow you to further configure your cluster. These settings can be changed at any time.

  • settings.ecs.logging-drivers: The list of logging drivers available on the container instance. The ECS agent running on a container instance must register available logging drivers before tasks that use those drivers are eligible to be placed on the instance. Bottlerocket enables the json-file, awslogs, and none drivers by default.
  • settings.ecs.allow-privileged-containers: Whether launching privileged containers is allowed on the container instance. If this value is set to false, privileged containers are not permitted. Bottlerocket sets this value to false by default.
  • settings.ecs.loglevel: The level of verbosity for the ECS agent's logs. Supported values are debug, info, warn, error, and crit, and the default is info.
  • settings.ecs.enable-spot-instance-draining: If the instance receives a spot termination notice, the agent will set the instance's state to DRAINING, so the workload can be moved gracefully before the instance is removed. Defaults to false.
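Putting a few of these together, user data to join a hypothetical cluster with debug logging and spot draining enabled might look like this (cluster name and values are illustrative):

```toml
[settings.ecs]
cluster = "my-ecs-cluster"
loglevel = "debug"
enable-spot-instance-draining = true
```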

OCI Hooks settings

Bottlerocket allows you to opt-in to use additional OCI hooks for your orchestrated containers. Once you opt-in to use additional OCI hooks, any new orchestrated containers will be configured with them, but existing containers won't be changed.

  • settings.oci-hooks.log4j-hotpatch-enabled: Enables the hotdog OCI hooks, which are used to inject the Log4j Hot Patch into containers. Defaults to false.

Container image registry settings

The following setting is optional and allows you to configure image registry mirrors and pull-through caches for your containers.

  • settings.container-registry.mirrors: An array of container image registry mirror settings. Each element specifies the registry and the endpoints for said registry. When pulling an image from a registry, the container runtime will try the endpoints one by one and use the first working one. (Docker and containerd will still try the default registry URL if the mirrors fail.)
    • Example user data for setting up image registry mirrors:
    [[settings.container-registry.mirrors]]
    registry = "*"
    endpoint = ["https://<example-mirror>","https://<example-mirror-2>"]
    
    [[settings.container-registry.mirrors]]
    registry = "docker.io"
    endpoint = [ "https://<my-docker-hub-mirror-host>", "https://<my-docker-hub-mirror-host-2>"]
    
    If you use a Bottlerocket variant that uses Docker as the container runtime, like aws-ecs-1, you should be aware that Docker only supports pull-through caches for images from Docker Hub (docker.io). Mirrors for other registries are ignored in this case.

For host-container and bootstrap-container images from Amazon ECR private repositories, registry mirrors are currently unsupported.

Updates settings

  • settings.updates.metadata-base-url: The common portion of all URIs used to download update metadata.
  • settings.updates.targets-base-url: The common portion of all URIs used to download update files.
  • settings.updates.seed: A u32 value that determines how far into the update schedule this machine will accept an update. We recommend leaving this at its default generated value so that updates can be somewhat randomized in your cluster.
  • settings.updates.version-lock: Controls the version that will be selected when you issue an update request. Can be locked to a specific version like v1.0.0, or latest to take the latest available version. Defaults to latest.
  • settings.updates.ignore-waves: Updates are rolled out in waves to reduce the impact of issues. For testing purposes, you can set this to true to ignore those waves and update immediately.
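For example, user data that pins hosts to a specific version and skips the wave schedule for testing might look like this (the version string is illustrative):

```toml
[settings.updates]
version-lock = "v1.4.0"
ignore-waves = true
```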

Network settings

  • settings.network.hostname: The desired hostname of the system. Important note for all Kubernetes variants: Changing this setting at runtime (not via user data) can cause issues with kubelet registration, as hostname is closely tied to the identity of the system for both registration and certificates/authorization purposes.

Most users don't need to change this setting, as the following defaults work for the majority of use cases. If this setting isn't set, we attempt to use DNS reverse lookup for the hostname. If the lookup is unsuccessful, the node's IP address is used.

Proxy settings

These settings configure the proxying behavior of Bottlerocket's system services, such as containerd and the kubelet or ECS agent:

  • settings.network.https-proxy: The HTTPS proxy server to be used by services on the host.
  • settings.network.no-proxy: A list of hosts that are excluded from proxying.

The no-proxy list will automatically include entries for localhost.

If you're running a Kubernetes variant, the no-proxy list will automatically include the Kubernetes API server endpoint and other commonly used Kubernetes DNS suffixes to facilitate intra-cluster networking.

Metrics settings

By default, Bottlerocket sends anonymous metrics when it boots, and once every six hours. This can be disabled by setting send-metrics to false. Here are the metrics settings:

  • settings.metrics.metrics-url: The endpoint to which metrics will be sent. The default is https://metrics.bottlerocket.aws/v1/metrics.
  • settings.metrics.send-metrics: Whether Bottlerocket will send anonymous metrics.
  • settings.metrics.service-checks: A list of systemd services that will be checked to determine whether a host is healthy.

Time settings

  • settings.ntp.time-servers: A list of NTP servers used to set and verify the system time.
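For example, user data pointing at public NTP pool servers might look like this (the server names are illustrative):

```toml
[settings.ntp]
time-servers = ["0.pool.ntp.org", "1.pool.ntp.org"]
```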

Kernel settings

  • settings.kernel.lockdown: This allows further restrictions on what the Linux kernel will allow, for example preventing the loading of unsigned modules. May be set to "none" (the default in older variants, up through aws-k8s-1.19), "integrity" (the default for newer variants), or "confidentiality". Important note: this setting cannot be lowered (toward 'none') at runtime. You must reboot for a change to a lower level to take effect.
  • settings.kernel.sysctl: Key/value pairs representing Linux kernel parameters. Remember to quote keys (since they often contain ".") and to quote all values.
    • Example user data for setting up sysctl:
      [settings.kernel.sysctl]
      "user.max_user_namespaces" = "16384"
      "vm.max_map_count" = "262144"
      

Custom CA certificates settings

By default, Bottlerocket ships with the Mozilla CA certificate store, but you can add self-signed certificates through the API using these settings:

  • settings.pki.<bundle-name>.data: Base64-encoded PEM-formatted certificates bundle; it can contain more than one certificate
  • settings.pki.<bundle-name>.trusted: Whether the certificates in the bundle are trusted; defaults to false when not provided

Here's an example of adding a bundle of self-signed certificates as user data:

[settings.pki.my-trusted-bundle]
data="W3N..."
trusted=true

[settings.pki.dont-trust-these]
data="W3N..."
trusted=false

Here's the same example but using API calls:

apiclient set \
  pki.my-trusted-bundle.data="W3N..." \
  pki.my-trusted-bundle.trusted=true  \
  pki.dont-trust-these.data="W3N..."  \
  pki.dont-trust-these.trusted=false

You can use this method from within a bootstrap container, if your user data is over the size limit of the platform.
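The data value is simply the base64 encoding of the PEM-formatted bundle. As a sketch (the certificate contents below are placeholders, not a real certificate):

```shell
# Concatenate one or more PEM certificates into a bundle (placeholder contents).
cat > bundle.pem <<'EOF'
-----BEGIN CERTIFICATE-----
...your certificate body...
-----END CERTIFICATE-----
EOF

# Base64-encode the whole bundle for settings.pki.<bundle-name>.data.
data_b64=$(base64 -w0 bundle.pem)
echo "$data_b64"
```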

Host containers settings

  • settings.host-containers.admin.source: The URI of the admin container.
  • settings.host-containers.admin.enabled: Whether the admin container is enabled.
  • settings.host-containers.admin.superpowered: Whether the admin container has high levels of access to the Bottlerocket host.
  • settings.host-containers.control.source: The URI of the control container.
  • settings.host-containers.control.enabled: Whether the control container is enabled.
  • settings.host-containers.control.superpowered: Whether the control container has high levels of access to the Bottlerocket host.

Custom host containers

admin and control are our default host containers, but you're free to change this. Beyond just changing the settings above to affect the admin and control containers, you can add and remove host containers entirely. As long as you define the three fields above -- source with a URI, and enabled and superpowered with true/false -- you can add host containers with an API call or user data.

You can optionally define a user-data field with arbitrary base64-encoded data, which will be made available in the container at /.bottlerocket/host-containers/$HOST_CONTAINER_NAME/user-data and (since Bottlerocket v1.0.8) /.bottlerocket/host-containers/current/user-data. (It was inspired by instance user data, but is entirely separate; it can be any data your host container feels like interpreting.)

Keep in mind that the default admin container (since Bottlerocket v1.0.6) relies on user-data to store SSH keys. You can set user-data to customize the keys, or you can use it for your own purposes in a custom container.
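As an example, assuming the JSON format the default admin container documents for SSH keys (the key below is a placeholder), you could build the base64 user-data value like this:

```shell
# JSON understood by the default admin container (assumed format; key is a placeholder).
cat > admin-user-data.json <<'EOF'
{"ssh": {"authorized-keys": ["ssh-ed25519 AAAA... user@example"]}}
EOF

user_data_b64=$(base64 -w0 admin-user-data.json)

# Corresponding TOML for instance user data:
cat <<EOF
[settings.host-containers.admin]
enabled = true
user-data = "$user_data_b64"
EOF
```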

Here's an example of adding a custom host container with API calls:

apiclient set \
   host-containers.custom.source=MY-CONTAINER-URI \
   host-containers.custom.enabled=true \
   host-containers.custom.superpowered=false

Here's the same example, but with the settings you'd add to user data:

[settings.host-containers.custom]
enabled = true
source = "MY-CONTAINER-URI"
superpowered = false

If the enabled flag is true, it will be started automatically.

All host containers will have the apiclient binary available at /usr/local/bin/apiclient so they're able to interact with the API. You can also use apiclient to run programs in other host containers. For example, to access the admin container:

apiclient exec admin bash

In addition, all host containers come with persistent storage that survives reboots and container start/stop cycles. It's available at /.bottlerocket/host-containers/$HOST_CONTAINER_NAME and (since Bottlerocket v1.0.8) /.bottlerocket/host-containers/current. The default admin host-container, for example, stores its SSH host keys under /.bottlerocket/host-containers/admin/etc/ssh/.

There are a few important caveats to understand about host containers:

  • They're not orchestrated. They only start or stop according to that enabled flag.
  • They run in a separate instance of containerd from the one used for orchestrated containers like Kubernetes pods.
  • They're not updated automatically. You need to update the source, disable the container, commit those changes, then re-enable it.
  • If you set superpowered to true, they'll essentially have root access to the host.

Because of these caveats, host containers are only intended for special use cases. We use them for the control container because it needs to be available early to give you access to the OS, and for the admin container because it needs high levels of privilege and because you need it to debug when orchestration isn't working.

Be careful, and make sure you have a similar low-level use case before reaching for host containers.

Bootstrap containers settings

  • settings.bootstrap-containers.<name>.source: The image for the container.
  • settings.bootstrap-containers.<name>.mode: The container's mode; one of off, once, or always. See below for a description of modes.
  • settings.bootstrap-containers.<name>.essential: Whether a failure of the container should fail the boot process; defaults to false.
  • settings.bootstrap-containers.<name>.user-data: Arbitrary base64-encoded data made available to the container.

Bootstrap containers are host containers that can be used to "bootstrap" the host before services like ECS Agent, Kubernetes, and Docker start.

Bootstrap containers are very similar to normal host containers; they come with persistent storage and with optional user data. Unlike normal host containers, bootstrap containers can't be treated as superpowered containers. However, bootstrap containers do have additional permissions that normal host containers do not have. Bootstrap containers have access to the underlying root filesystem on /.bottlerocket/rootfs as well as to all the devices in the host, and they are set up with the CAP_SYS_ADMIN capability. This allows bootstrap containers to create files, directories, and mounts that are visible to the host.

Bootstrap containers are set up to run after the systemd configured.target unit is active. The containers' systemd units depend on this target (and not on any of their bootstrap container peers), which means that bootstrap containers do not execute in a deterministic order. The boot process will "wait" for as long as the bootstrap containers run. Bootstrap containers configured with essential=true will stop the boot process if their exit code is non-zero.

Bootstrap containers have three different modes:

  • always: with this setting, the container is executed on every boot.
  • off: the container won't run.
  • once: with this setting, the container only runs on the first boot where the container is defined. Upon completion, the mode is changed to off.

Here's an example of adding a bootstrap container with API calls:

apiclient set \
   bootstrap-containers.bootstrap.source=MY-CONTAINER-URI \
   bootstrap-containers.bootstrap.mode=once \
   bootstrap-containers.bootstrap.essential=true

Here's the same example, but with the settings you'd add to user data:

[settings.bootstrap-containers.bootstrap]
source = "MY-CONTAINER-URI"
mode = "once"
essential = true

Mount propagations in bootstrap and superpowered containers

Both bootstrap and superpowered host containers are configured with a /.bottlerocket/rootfs/mnt bind mount that points to /mnt on the host, which is itself a bind mount of /local/mnt. This bind mount is set up with shared propagation, so any new mount point created underneath /.bottlerocket/rootfs/mnt in any bootstrap or superpowered host container will propagate across mount namespaces. You can use this feature to configure ephemeral disks attached to your hosts that you may want to use in your workloads.
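For example, a bootstrap or superpowered host container could format and mount an ephemeral disk so the mount is visible host-wide. This is a sketch; the device name and mount point are assumptions that depend on your instance type:

```shell
set -euo pipefail

# Assumed ephemeral device; check your instance type for the real name.
DISK=/.bottlerocket/rootfs/dev/nvme1n1
MOUNT_POINT=/.bottlerocket/rootfs/mnt/ephemeral

mkfs.ext4 "$DISK"                # destroys any existing data on the disk
mkdir -p "$MOUNT_POINT"
mount "$DISK" "$MOUNT_POINT"     # propagates to /mnt/ephemeral on the host
```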

Platform-specific settings

Platform-specific settings are automatically set at boot time by early-boot-config based on metadata available on the running platform. They can be overridden for testing purposes in the same way as other settings.

AWS-specific settings

AWS-specific settings are automatically set based on calls to the Instance MetaData Service (IMDS).

  • settings.aws.region: This is set to the AWS region in which the instance is running, for example us-west-2.
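For testing, you can override the detected value like any other setting, for example:

```shell
# Override the region that was populated from IMDS at boot
# (testing only; normally you'd leave this auto-detected).
apiclient set settings.aws.region=us-east-1

# Confirm the new value.
apiclient get settings.aws.region
```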

Logs

You can use logdog through the admin container to obtain an archive of log files from your Bottlerocket host. SSH to the Bottlerocket host, or run apiclient exec admin bash, to access the admin container, then run:

sudo sheltie
logdog

This will write an archive of the logs to /var/log/support/bottlerocket-logs.tar.gz. You can use SSH to retrieve the file. Once you have exited from the Bottlerocket host, run a command like:

ssh -i YOUR_KEY_FILE \
    ec2-user@YOUR_HOST \
    "cat /.bottlerocket/rootfs/var/log/support/bottlerocket-logs.tar.gz" > bottlerocket-logs.tar.gz

(If your instance isn't accessible through SSH, you can use SSH over SSM.)

For a list of what is collected, see the logdog command list.

Kdump Support

Bottlerocket provides support for collecting kernel crash dumps whenever the system kernel panics. When this happens, both the dmesg log and the vmcore dump are stored at /var/log/kdump, and the system reboots.

There are a few important caveats about the provided kdump support:

  • Currently, only vmware variants have kdump support enabled
  • The system kernel reserves 256MB for the crash kernel, but only when the host has at least 2GB of memory; the reserved space won't be available to processes running on the host
  • The crash kernel will only be loaded when the crashkernel parameter is present in the kernel's cmdline and if there is memory reserved for it
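On a variant with kdump enabled, you can verify the setup from the host's root shell using standard kernel interfaces:

```shell
sudo sheltie   # enter the host's root shell from the admin container

# The crash kernel reservation should appear on the kernel command line...
grep -o 'crashkernel=[^ ]*' /proc/cmdline

# ...and /sys reports whether a crash kernel is actually loaded (1 = loaded).
cat /sys/kernel/kexec_crash_loaded

# dmesg logs and vmcore dumps from prior panics land here.
ls /var/log/kdump
```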

Details

Security

🛡️ 🦀

To learn more about security features in Bottlerocket, please see SECURITY FEATURES. It describes how we use features like dm-verity and SELinux to protect the system from security threats.

To learn more about security recommendations for Bottlerocket, please see SECURITY GUIDANCE. It documents additional steps you can take to secure the OS, and includes resources such as a Pod Security Policy for your reference.

In addition, almost all first-party components are written in Rust. Rust eliminates some classes of memory safety issues, and encourages design patterns that help security.

Packaging

Bottlerocket is built from source using a container toolchain. We use RPM package definitions to build and install individual packages into an image. RPM itself is not in the image - it's just a common and convenient package definition format.

We currently package a number of major third-party components. For further documentation, or to see the full list of packages, see the packaging directory.

Updates

The Bottlerocket image has two identical sets of partitions, A and B. When updating Bottlerocket, the partition table is updated to point from set A to set B, or vice versa.

We also track successful boots; if there are boot failures, the system automatically reverts to the prior working partition set.

The update process uses images secured by TUF. For more details, see the update system documentation.
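Day to day, you can drive this A/B update flow from the host with apiclient's update subcommands; a typical session looks like:

```shell
apiclient update check   # ask the update system whether a new version is available
apiclient update apply   # download the update and write it to the inactive partition set
apiclient reboot         # reboot into the updated partition set
```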

API

There are two main ways you'd interact with a production Bottlerocket instance. (There are a couple more exploration methods above for test instances.)

The first method is through a container orchestrator, for when you want to run or manage containers. This uses the standard channel for your orchestrator, for example a tool like kubectl for Kubernetes.

The second method is through the Bottlerocket API, for example when you want to configure the system.

There's an HTTP API server that listens on a local Unix-domain socket. Remote access to the API requires an authenticated transport such as SSM's RunCommand or Session Manager, as described above. For more details, see the apiserver documentation.

You can use apiclient to make requests. They're just HTTP requests, but apiclient simplifies things by handling the Unix-domain socket transport for you.
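For example, the same settings query can be made through apiclient or directly against the socket (the /run/api.sock path is what current builds use; check the apiserver documentation for your version):

```shell
# Through apiclient (handles the Unix-domain socket for you):
apiclient -u /settings

# Directly with curl against the API socket:
curl --unix-socket /run/api.sock http://localhost/settings
```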

To make configuration easier, we have early-boot-config, which can send an API request for you based on instance user data. If you start a virtual machine, like an EC2 instance, it will read TOML-formatted Bottlerocket configuration from user data and send it to the API server. This way, you can configure your Bottlerocket instance without having to make API calls after launch.

See Settings above for examples and to understand what you can configure.

You can also access host containers through the API using apiclient exec.

The server and client are the user-facing components of the API system, but there are a number of other components that work together to make sure your settings are applied, and that they survive upgrades of Bottlerocket.

For more details, see the API system documentation.

Default Volumes

Bottlerocket operates with two default storage volumes: the root device, which holds the active and passive partition sets, and the data device, which is used as persistent storage for container images, orchestration state, and host containers.

Comments
  • metricdog: anonymous bottlerocket metrics

    metricdog: anonymous bottlerocket metrics

    Issue Number

    Closes #1000

    Description

    Adds a program and systemd service that checks whether certain critical services are running. Reports this to a URL via GET request.

    Testing

    Unit Testing

    I organized the code in such a way as to facilitate unit testing everything except the calls to systemctl.

    Happy Integ

    I set up an S3 bucket and confirmed that metricdog requests are working. I ran metricdog manually from the command line under various scenarious and traced the URL that was produced.

    Healthy Health Ping

    metricdog --log-level trace send-health-ping
    

    Result:

    https://somedomain.com/metricstest?sender=metricdog&event=health-ping&version=0.4.1&variant=aws-k8s-1.15&arch=x86_64&region=us-west-2&seed=1712&version-lock=latest&ignore-waves=false&failed_services=&is_healthy=true
    

    Boot Success

    metricdog --log-level trace send-boot-success
    
    https://somedomain.com/metricstest?sender=metricdog&event=boot-success&version=0.4.1&variant=aws-k8s-1.15&arch=x86_64&region=us-west-2&seed=1712&version-lock=latest&ignore-waves=false
    

    One Failed Service

    I 'broke' kubelet and ran a health ping, and received:

    https://somedomain.com/metricstest?sender=metricdog&event=health-ping&version=0.4.1&variant=aws-k8s-1.15&arch=x86_64&region=us-west-2&seed=1712&version-lock=latest&ignore-waves=false&failed_services=kubelet%3A255&is_healthy=false
    

    The failed services list de-escapes to kubelet:255

    Two Failed Services

    I 'broke' an additional service:

    https://somedomain.com/metricstest?sender=metricdog&event=health-ping&version=0.4.1&variant=aws-k8s-1.15&arch=x86_64&region=us-west-2&seed=1605&version-lock=latest&ignore-waves=false&failed_services=containerd%3A1%2Ckubelet%3A255&is_healthy=false
    

    The failed services list de-escapes to containerd:1,kubelet:255.

    Bad URL Integ

    To make sure that a bad URL doesn't cause problems, I ran an instance with a fake URL. I found that that, send-boot-success and send-health-ping each caused an error to be reported in the system journal, but no other ill affects were observed.

    Systemd

    I ran hosts and observed that the systemd unit and timer work as expected.

    Edit 8/28/2020: I tested an ECS variant with ab6930c and found the the URLs are still being constructed properly, including the reporting of failed services.

    Migration

    I tested the migration through upgrade and downgrade, confirming that settings were added/removed and the system was healthy at all three waypoints.

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by webern 38
  • build only the packages needed for the current variant

    build only the packages needed for the current variant

    Issue number:

    Closes #1361

    Description of changes:

    Utilize cargo dependency graph to build only the packages we need for the current variant.

    Packages express a dependency on anything that the rpm spec file declares a dependency on. (We already did this for BuildRequires but now we do it for Requires as well.) Additionally, the variant's Cargo.toml needs to express the things the package crates it depends on.

    Note: We now rely on a change in behavior in cargo 1.51.0 to allow us to build a package that is a dependency of a workspace, but which is not a workspace member. You will need to rustup update if you have an older version.

        build: refactor os package dependencies
    
        Signed-off-by: Ben Cressey <[email protected]>
    
        Change os.spec such that all first party code is installed when the os
        package is installed. Change release.spec such that it requires only the
        os package for first-party code instead of requiring each individual
        first-party package.
    
        This consolidates the package dependencies for first-party code into
        one place, instead of picking some of the packages in "release" and
        others through the variant definition.
    
        This is part of #1361. We want to better manage package dependencies
        with cargo so that we can build only the packages needed for a given
        variant, and this change eliminates some one-off first-party package
        dependencies that needed to be expressed at the variant level.
    
        packages: express installation dependencies in cargo
    
        Previously we expressed RPM BuildRequires dependencies in Cargo's
        dependency graph to ensure the necessary RPM's exist before we build
        a package. If we add to this RPM Requires dependencies (dependencies
        that are needed when software packages are installed), then we can use
        Cargo to select only the packages that are needed for a set of desired
        install packages.
    
        build: only build packages for current variant
    
        Adds the packages that are needed to build a variant to the variant's
        Cargo dependencies. Previously the build-variant makefile target assumed
        that all packages were pre-built. Instead we now tell Cargo to build the
        packages we need for the variant.
    
        Creates a workspace in the variants directory and removes the workspace
        in the packages directory. Updates the makefile accordingly. Now only
        the packages that are part of the dependency tree starting with the
        variant will be built.
    
        The build-kmod-kit target previously depended on build-packages, which
        is no longer available. So a new target, build-package, and a
        specialization of it, build-kernel, have been added. These use the same
        workspace context that is used when building variants to ensure that the
        same rpm is used by build-kmod-kit.
    
        makefile: remove world targets
    
        cargo make world targets were vestigial. Probably nobody is using them
        so we cleaned them up.
    

    Testing done:

    Starting from a clean build state:

    • ran cargo make -e BUILDSYS_VARIANT=aws-k8s-1.18
      • Observed that only k8s 1.18 was built (other k8s versions were not)
      • build succeeded
    • Without cleaning, ran cargo make -e BUILDSYS_VARIANT=aws-k8s-1.19
      • Observed that existing packages (other than os and release) were not rebuilt.
      • Saw that k8s 1.19 was built.
    • Without cleaning, ran cargo make -e BUILDSYS_VARIANT=aws-k8s-1.18 repo
      • Observed nothing was rebuilt.
    • Without cleaning, ran cargo make -e BUILDSYS_VARIANT=aws-k8s-1.19 repo
      • Observed that nothing was rebuilt.
    • find . -type d -name target
      • ./tools/target
      • ./variants/target

    Starting from a clean build state:

    • ran cargo make build-kmod-kit
    • Observed the kernel being built.
    • target succeeded
    • find . -type d -name target
      • ./tools/target
      • ./variants/target

    Starting from the above kmod-kit build state:

    • Ran an x86_64 aws-k8s-1.17 variant and ran a pod.
    • Ran an aarch64 aws-ecs-1 variant and ran a task.
    • Ran an aarch64 aws-dev variant and logged in.

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by webern 32
  • CloudFormation signaling

    CloudFormation signaling

    What I'd like: It would be nice if there was an easy way to call CloudFormation's SignalResource when booting a Bottlerocket instance. This is typically considered a best practice when creating an ASG in CloudFormation so that it can roll back to an earlier LaunchTemplate or LaunchConfig if the instances don't come online.

    See, for example, the ECS CloudFormation reference architecture, which uses the cfn-signal CLI: https://github.com/aws-samples/ecs-refarch-cloudformation/blob/a257e226b33bd9d2a721e5afd9d7e8b66dbacfdc/infrastructure/ecs-cluster.yaml#L87

    In Bottlerocket's case, a typical boot issue I've encountered is passing malformed user data. In such a case, Bottlerocket's early-boot-config.service will fail. But if you don't signal CloudFormation, CloudFormation will still consider the deploy a success, potentially leaving you with no working instances.

    Any alternatives you've considered:

    Running cfn-signal in a bootstrap container would probably work. But it's not clear to me that bootstrap containers run late enough in the boot sequence to verify that all services are up.

    type/enhancement area/core 
    opened by gabegorelick 28
  • BottlerocketOS v1.7.0 not supported by AWS Inspector

    BottlerocketOS v1.7.0 not supported by AWS Inspector

    Hi, I've noticed that latest BottlerocketOS images are unsupported by AWS Inspector. I am curious what broke Inspector support, because v1.6.2 is supported.

    opened by dusansusic 25
  • Update API

    Update API

    Issue number: N/A

    Adds a set of API endpoints to apiserver that exposes OS update information and manages OS updates. Adds a rust binary, thar-be-updates that apiserver uses to dispatch update commands. Adds new updog changes to support the update API

    thar-be-updates models the Bottlerocket update process after a state machine: update-api-states-mark2(6)

    Description of changes:

    Author: Erikson Tung <[email protected]>
    Date:   Mon Jun 8 14:30:21 2020 -0700
    
        models: new 'FriendlyVersion' model type
        
        Adds a new model type to represent versions that can optionally be
        prefixed with 'v' or represeted by "latest"
    
    Author: Erikson Tung <[email protected]>
    Date:   Mon Jun 8 14:33:16 2020 -0700
    
        settings: add new 'version-lock' and 'ignore-waves' settings
        
        Adds a 'version-lock' setting for specifying the version to update to
        when updating via the API.
        
        Adds a 'ignore-waves' setting for specifying whether to respect update
        waves when updating via the API.
    
    Author: Erikson Tung <[email protected]>
    Date:   Thu Jun 11 09:14:08 2020 -0700
    
        signpost::State: make next(), active(), inactive() public
        
        Exposes 'next()', 'active()', 'inactive()' and consequently 'SetSelect' fo
    r
        external crate use.
    
    Author: Erikson Tung <[email protected]>
    Date:   Wed Jun 17 11:04:03 2020 -0700
    
        updog: new subcommand to revert 'update-apply'
        
        Adds a new subcommand for reverting actions done by 'update-apply'.
        Used to "deactivate" an update.
    
    Author: Erikson Tung <[email protected]>
    Date:   Wed Jun 17 10:52:25 2020 -0700
    
        updog: use version-lock and ignore-wave setting for default behavior
        
        updog reads the 'version-lock' and 'ignore-wave' settings for
        determining default update behavior. Still allows overrides via the
        command line options.
    
    Author: Erikson Tung <[email protected]>
    Date:   Thu Jun 11 10:20:42 2020 -0700
    
        thar-be-updates: a Bottlerocket update dispatcher
        
        Adds a new crate 'thar-be-updates' that serves as the interface for
        `apiserver` to dispatch updates.
    
    Author: Erikson Tung <[email protected]>
    Date:   Thu Jun 11 13:52:40 2020 -0700
    
        apiserver: adds update actions API
        
        Extends the Bottlerocket API with endpoints to do updates.
    
        New action endpoints:
    
        * /actions/refresh-update
        * /actions/prepare-update
        * /actions/activate-update
        * /actions/deactivate-update
        * /updates/status
    
        'apiserver' uses 'thar-be-updates' to dispatch update commands.
    

    Testing done:

    Testing the updog changes:

    Build an image with release version set to 0.3.2, launched instance where version-lock is set to 0.3.3. Then tried the following:

    Updog configuration generated properly.

    bash-5.0# cat /etc/updog.toml 
    metadata_base_url = "https://updates.bottlerocket.aws/2020-02-02/aws-k8s-1.15/x86_64/"
    targets_base_url = "https://updates.bottlerocket.aws/targets/"
    seed = 191
    version_lock = "v0.3.3"
    ignore_waves = false
    bash-5.0# cat /etc/os-release 
    NAME=Bottlerocket
    ID=bottlerocket
    PRETTY_NAME="Bottlerocket OS 0.3.2"
    VARIANT_ID=aws-k8s-1.15
    VERSION_ID=0.3.2
    BUILD_ID=7192d50a-dirty
    

    Initiate update with updog update-image and see that by default, updog updates to the version-locked version (0.3.3 in this case).

    bash-5.0# updog whats      
    aws-k8s-1.15 0.3.3
    bash-5.0# updog whats --all
    aws-k8s-1.15 0.3.4
    aws-k8s-1.15 0.3.3
    aws-k8s-1.15 0.3.2
    aws-k8s-1.15 0.3.1
    aws-k8s-1.15 0.3.0
    bash-5.0# updog update-image  
    Starting update to 0.3.3
    Update applied: aws-k8s-1.15 0.3.3
    

    I could also update the boot flags and then revert it via updog revert-update-apply

    bash-5.0# updog update-apply
    bash-5.0# signpost status
    OS disk: /dev/nvme0n1
    Set A:   boot=/dev/nvme0n1p2 root=/dev/nvme0n1p3 hash=/dev/nvme0n1p4 priority=1 tries_left=0 successful=true
    Set B:   boot=/dev/nvme0n1p6 root=/dev/nvme0n1p7 hash=/dev/nvme0n1p8 priority=2 tries_left=1 successful=false
    Active:  Set A
    Next:    Set B
    
    bash-5.0# updog revert-update-apply   
    bash-5.0# signpost status          
    OS disk: /dev/nvme0n1
    Set A:   boot=/dev/nvme0n1p2 root=/dev/nvme0n1p3 hash=/dev/nvme0n1p4 priority=2 tries_left=0 successful=true
    Set B:   boot=/dev/nvme0n1p6 root=/dev/nvme0n1p7 hash=/dev/nvme0n1p8 priority=0 tries_left=1 successful=false
    Active:  Set A
    Next:    Set A
    
    bash-5.0# updog update-apply       
    bash-5.0# signpost status   
    OS disk: /dev/nvme0n1
    Set A:   boot=/dev/nvme0n1p2 root=/dev/nvme0n1p3 hash=/dev/nvme0n1p4 priority=1 tries_left=0 successful=true
    Set B:   boot=/dev/nvme0n1p6 root=/dev/nvme0n1p7 hash=/dev/nvme0n1p8 priority=2 tries_left=1 successful=false
    Active:  Set A
    Next:    Set B
    bash-5.0# 
    

    Testing the update API

    Built custom image with a 0.3.2 version tag. Launched instance, tried the following (version-lock set to 'latest'): (I pretty-printed the json output for clarity)

    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    Status 404 when GETing unix://2f72756e2f6170692e736f636b:0/updates/status: Update status is uninitialized, refresh-updates to initialize it
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /actions/refresh-updates -m POST
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    {
      "update_state": "Available",
      "available_updates": [
        "0.4.0",
        "0.3.4",
        "0.3.3",
        "0.3.2",
        "0.3.1",
        "0.3.0"
      ],
      "chosen_update": {
        "arch": "x86_64",
        "version": "0.4.0",
        "variant": "aws-k8s-1.15"
      },
      "active_partition": {
        "image": {
          "arch": "x86_64",
          "version": "0.3.4",
          "variant": "aws-k8s-1.15"
        },
        "next_to_boot": true
      },
      "staging_partition": null,
      "most_recent_command": {
        "cmd_type": "refresh",
        "cmd_status": "Success",
        "timestamp": "2020-07-10T06:42:52.284285751Z",
        "exit_status": 0,
        "stderr": ""
      }
    }
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -v -u /actions/prepare-update -m POST
    204 No Content
    

    Here you can see apiserver trying to obtain the shareable lock to read the update status file. Caller gets 423 responses when the lock is held by t-b-u.

    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    Status 423 when GETing unix://2f72756e2f6170692e736f636b:0/updates/status: Unable to obtain shared lock for reading update status: Resource temporarily unavailable (os error 11)
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    Status 423 when GETing unix://2f72756e2f6170692e736f636b:0/updates/status: Unable to obtain shared lock for reading update status: Resource temporarily unavailable (os error 11)
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    Status 423 when GETing unix://2f72756e2f6170692e736f636b:0/updates/status: Unable to obtain shared lock for reading update status: Resource temporarily unavailable (os error 11)
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    Status 423 when GETing unix://2f72756e2f6170692e736f636b:0/updates/status: Unable to obtain shared lock for reading update status: Resource temporarily unavailable (os error 11)
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    {
      "update_state": "Staged",
      "available_updates": [
        "0.4.0",
        "0.3.4",
        "0.3.3",
        "0.3.2",
        "0.3.1",
        "0.3.0"
      ],
      "chosen_update": {
        "arch": "x86_64",
        "version": "0.4.0",
        "variant": "aws-k8s-1.15"
      },
      "active_partition": {
        "image": {
          "arch": "x86_64",
          "version": "0.3.4",
          "variant": "aws-k8s-1.15"
        },
        "next_to_boot": true
      },
      "staging_partition": {
        "image": {
          "arch": "x86_64",
          "version": "0.4.0",
          "variant": "aws-k8s-1.15"
        },
        "next_to_boot": false
      },
      "most_recent_command": {
        "cmd_type": "prepare",
        "cmd_status": "Success",
        "timestamp": "2020-07-10T06:44:27.982261529Z",
        "exit_status": 0,
        "stderr": "Starting update to 0.4.0\n"
      }
    }
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /actions/activate-update -m POST
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    {
      "update_state": "Ready",
      "available_updates": [
        "0.4.0",
        "0.3.4",
        "0.3.3",
        "0.3.2",
        "0.3.1",
        "0.3.0"
      ],
      "chosen_update": {
        "arch": "x86_64",
        "version": "0.4.0",
        "variant": "aws-k8s-1.15"
      },
      "active_partition": {
        "image": {
          "arch": "x86_64",
          "version": "0.3.4",
          "variant": "aws-k8s-1.15"
        },
        "next_to_boot": false
      },
      "staging_partition": {
        "image": {
          "arch": "x86_64",
          "version": "0.4.0",
          "variant": "aws-k8s-1.15"
        },
        "next_to_boot": true
      },
      "most_recent_command": {
        "cmd_type": "activate",
        "cmd_status": "Success",
        "timestamp": "2020-07-10T06:47:19.903337270Z",
        "exit_status": 0,
        "stderr": ""
      }
    }
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /actions/deactivate-update -m POST
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    {"update_state":"Staged","available_updates":["0.4.0","0.3.4","0.3.3","0.3.2","0.3.1","0.3.0"],"chosen_update":{"arch":"x86_64","version":"0.4.0","variant":"aws-k8s-1.15"},"active_partition":{"image":{"arch":"x86_64","version":"0.3.4","variant":"aws-k8s-1.15"},"next_to_boot":true},"staging_partition":{"image":{"arch":"x86_64","version":"0.4.0","variant":"aws-k8s-1.15"},"next_to_boot":false},"most_recent_command":{"cmd_type":"deactivate","cmd_status":"Success","timestamp":"2020-07-10T15:55:09.238751262Z","exit_status":0,"stderr":""}}
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /actions/activate-update -m POST
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /updates/status
    {"update_state":"Ready","available_updates":["0.4.0","0.3.4","0.3.3","0.3.2","0.3.1","0.3.0"],"chosen_update":{"arch":"x86_64","version":"0.4.0","variant":"aws-k8s-1.15"},"active_partition":{"image":{"arch":"x86_64","version":"0.3.4","variant":"aws-k8s-1.15"},"next_to_boot":false},"staging_partition":{"image":{"arch":"x86_64","version":"0.4.0","variant":"aws-k8s-1.15"},"next_to_boot":true},"most_recent_command":{"cmd_type":"activate","cmd_status":"Success","timestamp":"2020-07-10T15:55:51.310418363Z","exit_status":0,"stderr":""}}
    [ec2-user@ip-192-168-0-240 ~]$ apiclient -u /actions/reboot -m POST
    

    Please let me know if there are any other scenarios you'd like me to run through.

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by etungsten 22
  • Access between host containers

    Access between host containers

    What I'd like:

    I'd like to be able to get into the admin container more easily and securely.

    Currently, you SSM to control, make API calls to enable admin, exit, then SSH to admin, and you have to be sure your instance is accessible from the internet via IGW and public IP.

    If the control container had an SSH client, you could ssh from it to the admin container directly, without exiting and initiating an SSH session through the Internet, saving half a step and remaining in your VPC.

    Any alternatives you've considered:

    The SSM agent in the control container could do clever things to get us into the admin container, for example discriminate on user ID, or have parameters in a custom SSM document. These are longer term ideas, though?

    type/enhancement 
    opened by tjkirch 21
  • updog and migrator: signed migrations

    updog and migrator: signed migrations

    Issue number:

    Closes #91 Closes #905

    Description of changes:

    Implements all changes necessary for migrator to load a locally cached tuf repo and obtain migrations from there. See #905 for a detailed explanation of the aigned migrations design.

    Additionally, we decided to maintain compatibility so that instances can be upgraded into signed migrations. To facilitate this, in the current version, upog writes both signed and unsigned migrations. migrator checks the from version. If it's less than 0.3.5, migrator executes unsigned migrations, otherwise it executes signed migrations.

    In a future version, we can create a breaking change in which unsigned migrations are no-longer read. At that point, versions less than 0.3.5 must pass through a version that supports both signed and unsigned migrations (such as this one) in order to upgrade into signed migrations.

    There is one behavioral change about signed migrations. Previously we used a sort of the migration names to provide stability to their order of execution (when more than one migration existed for a single version pair). With signed migrations, this is no longer necessary and the migrations will run in the order that they are listed in the manifest.

    Testing done:

    The testing procedure is extensive. Note: a trace was added so that migrator announces in the system journal whether it is running signed or unsigned migrations. Also note, v0.3.1 was used as a starting point because a migration exists for v0.3.2 (which changes the host container version). Thus upgrading from v0.3.1 presents an actual unsigned migration.

    Test Setup

    • Create an AMI of v0.3.1, but with my non-production root.json.
    • Create matching image binaries for v0.3.1 with my non-production root.json and add them to a TUF repo (replacing the prod images).
    • Create a version of the code in this PR, naming the version 0.99.0, create an AMI of this and add the update binaries to the TUF repo/manifest.
    • Create a version of the code in this PR, along with a migration that adds a foo setting and sets its value to bar. Call this v0.99.1 and add it to the TUF repo/manifest.

    Test Execution

    Starting from the v0.3.1 AMI, perform the following sequence of upgrades and downgrades.

    • Start at v0.3.1 via AMI
    • Upgrade v0.3.1 -> v0.99.0
    • Upgrade v0.99.0 -> v0.99.1
    • Downgrade v0.99.1 -> v0.99.0
    • Downgrade v0.99.0 -> v0.3.1

    At each step along the way.

    • Ensure that Kubernetes is working by running a busybox pod.
    • Copy the system journal and Bottlerocket API settings to local storage.

    At the end of the cycle, diff the system journals to observe that migrator annouced:

    • Upgrade v0.3.1 -> v0.99.0 'running unsigned migrations'
    • Upgrade v0.99.0 -> v0.99.1 'running signed migrations'
    • Downgrade v0.99.1 -> v0.99.0 'running signed migrations'
    • Downgrade v0.99.0 -> v0.3.1: older version of migrator made no announcement

    Diff the API settings to observe the following setting changes:

    • Upgrade v0.3.1 -> v0.99.0: host container changed from v0.4.0 to v0.5.0
    • Upgrade v0.99.0 -> v0.99.1: "foo": "bar" was added
    • Downgrade v0.99.1 -> v0.99.0: "foo": "bar" was removed
    • Downgrade v0.99.0 -> v0.3.1: host container changed from v0.5.0 to v0.4.0

    Additional Testing

    Additional testing has been performed with v0.99.0 AMI -> v0.99.1 -> v0.99.0, though this has not been repeated at every testing cycle.

    TODO

    • [x] Use pentacle
    • [x] Use tough::RepoEditor to create the migrator unit test repo on the fly. (pushed)
    • [x] Use a single migration script for testing instead of two different migration scripts, and compress the migration script on the fly instead of checking in a compressed version of it. (pushed)
    • [x] Set repo expiration to 1970-01-01 to make it obvious that it is expired. Also make a note about this in the test. (pushed)

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by webern 20
  • Add aws-k8s-1.23, vmware-k8s-1.23, aws-k8s-1.23-nvidia, metal-k8s-1.23 variants

    Issue number: Resolves https://github.com/bottlerocket-os/bottlerocket/issues/2139

    Description of changes:

    For reference, the addition of k8s-1.22 variants was in https://github.com/bottlerocket-os/bottlerocket/pull/1962

        k8s-1.23: enable `CSIMigration*` feature gates in kubelet
        
        This enables the CSIMigration feature gates, which allow migration from
        in-tree storage plugins to out-of-tree, vendor-specific CSI drivers.
        
        EKS 1.23 is planning to enable CSIMigrationAWS by default.
        
        vSphere container storage plugin migration has been in beta since 1.19
        and the in-tree plugin does not support the newest features in the
        vSphere CSI driver.
        
        Upstream kubernetes is eventually going to deprecate in-tree CSI
        plugins.
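In a kubelet configuration file, enabling these gates looks roughly like the fragment below (an illustrative sketch; the exact set of gates and their defaults depend on the variant and Kubernetes version):

```yaml
# Illustrative KubeletConfiguration fragment; gate names follow upstream k8s 1.23.
apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
featureGates:
  CSIMigration: true
  CSIMigrationAWS: true
  CSIMigrationvSphere: true
```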
    
        metal-k8s: limit supported arch to x86_64
    
        Add metal-k8s-1.23 variant
        
        This change adds an additional variant `metal-k8s-1.23`, which includes
        necessary Kubernetes packages and settings for running Bottlerocket on
        metal in a Kubernetes v1.23 cluster.
    
    
        Add vmware-k8s-1.23 variant
        
        Adds vmware-k8s-1.23 variant and relinks symlinks in models.
        vmware-k8s-1.23 supports boot config settings in models.
    
        Rename symlink for oci-hooks in vmware-k8s-1.22
    
    
        Add aws-k8s-1.23-nvidia variant
        
        Adds aws-k8s-1.23-nvidia variant and creates symlinks for the previous
        k8s version's nvidia variant in models.
        
        aws-k8s-1.23-nvidia supports boot config settings while other nvidia
        variants do not.
    
        Add aws-k8s-1.23 variant
        
        Adds aws-k8s-1.23 variant, relinks symlinks in models.
        aws-k8s-1.23 supports settings.boot while older aws-k8s-* variants do not.
    
        containerd,k8s: add new containerd-config templates, update defaults
        
        For K8s 1.23, we're having kubelet use `/run/containerd/containerd.sock`
        for the container runtime socket instead of dockershim.sock.
        
        For compatibility purposes, K8s 1.23 maintains a symlink at
        /run/dockershim.sock. For existing older K8s variants, we're keeping
        dockershim.sock as the main socket for now to avoid breakage until we
        can give sufficient notice of the dockershim deprecation.
    
    
        packages/kubernetes: move ExecStartPre for pause ctr pull to drop-in
        
        This creates a drop-in file for pulling the pause container image with
        host-ctr in `kubelet.service`.
    
        packages: add kubernetes-1.23
        
        This adds K8s 1.23 package. We're using K8s 1.23 source from EKS-D.
        
        1.23 Kubelet uses `/run/containerd/containerd.sock` as the container
        runtime socket due to the upcoming dockershim.sock deprecation.
        We also create a symlink for the container runtime socket to be
        `/run/dockershim.sock` for compatibility purposes.
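The symlink is created via a systemd drop-in, as can be seen in the `systemctl status` output in the testing section of this PR. As a sketch, the drop-in is roughly:

```ini
# dockershim-symlink.conf (sketch, reconstructed from the ExecStartPre line
# visible in the systemctl status output in this PR's testing section)
[Service]
ExecStartPre=/bin/ln -sf /run/containerd/containerd.sock /run/dockershim.sock
```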
    

    Testing done:

    Testing the containerd socket changes. In 1.23, kubelet starts up fine:

    bash-5.1# systemctl status kubelet
    ● kubelet.service - Kubelet
         Loaded: loaded (/x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/system/kubelet.service; enabled; vendor preset: enabled)
        Drop-In: /x86_64-bottlerocket-linux-gnu/sys-root/usr/lib/systemd/system/kubelet.service.d
                 └─dockershim-symlink.conf
                 /etc/systemd/system/kubelet.service.d
                 └─exec-start.conf
         Active: active (running) since Fri 2022-06-03 20:33:17 UTC; 6min ago
           Docs: https://github.com/kubernetes/kubernetes
        Process: 1640 ExecStartPre=/sbin/iptables -P FORWARD ACCEPT (code=exited, status=0/SUCCESS)
        Process: 1645 ExecStartPre=/usr/bin/host-ctr --containerd-socket=/run/containerd/containerd.sock --namespace=k8s.io pull-image --source=${POD_INFRA_CONTAINER_IMAGE} --registry-config=/etc/host-containers/host-ctr.toml (code=exited, status=0/SUCCESS)
        Process: 1669 ExecStartPre=/bin/ln -sf /run/containerd/containerd.sock /run/dockershim.sock (code=exited, status=0/SUCCESS)
       Main PID: 1670 (kubelet)
          Tasks: 16 (limit: 9168)
         Memory: 190.1M
            CPU: 3.420s
         CGroup: /runtime.slice/kubelet.service
                 └─1670 /usr/bin/kubelet --cloud-provider aws --kubeconfig /etc/kubernetes/kubelet/kubeconfig --config /etc/kubernetes/kubelet/config --container-runtime=remote --container-runtime-endpoint=unix:///run/containerd/containerd.sock --containerd=/run/containerd/containerd.sock --network-plugin cni --root-dir /var/lib/kubelet --cert-dir /var/lib/kubelet/pki --node-ip 2600:1f14:fb6:1202:8cc4:826:20a6:1334 --node-labels "" --register-with-taints "" --pod-infra-container-image 602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/pause:3.1-eksbuild.1
    

    dockershim.sock is a symlink to containerd.sock, as expected, and is functional:

    bash-5.1# ls -al /run/dockershim.sock 
    lrwxrwxrwx. 1 root root 31 Jun  3 20:33 /run/dockershim.sock -> /run/containerd/containerd.sock
    bash-5.1# ctr -a /run/dockershim.sock -n k8s.io containers ls
    CONTAINER                                                           IMAGE                                                                                  RUNTIME                  
    22f11405fe002530e79d4dc496f1f4bde565f4ab50343d81a9dc5143cc922880    602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.22.6-eksbuild.1         io.containerd.runc.v2    
    ...
    bash-5.1# ctr -a /run/containerd/containerd.sock -n k8s.io containers ls
    CONTAINER                                                           IMAGE                                                                                  RUNTIME                  
    22f11405fe002530e79d4dc496f1f4bde565f4ab50343d81a9dc5143cc922880    602401143452.dkr.ecr.us-west-2.amazonaws.com/eks/kube-proxy:v1.22.6-eksbuild.1         io.containerd.runc.v2    
    ...
    
    • [x] Pods run OK
    • [x] Conformance testing with 1.23 control plane and x86_64 aws-k8s-1.23 nodes
          "plugin": "e2e",
          "node": "global",
          "status": "complete",
          "result-status": "passed",
          "result-counts": {
            "passed": 346,
            "skipped": 6698
          },
    
    • [x] Conformance testing with 1.23 control plane and aarch64 aws-k8s-1.23 nodes
          "plugin": "e2e",
          "node": "global",
          "status": "complete",
          "result-status": "passed",
          "result-counts": {
            "passed": 346,
            "skipped": 6698
          },
    
    • [x] Conformance testing with 1.23 control plane and x86_64 aws-k8s-1.23 nodes in an IPv6 cluster
          "plugin": "e2e",
          "node": "global",
          "status": "complete",
          "result-status": "passed",
          "result-counts": {
            "passed": 346,
            "skipped": 6698
          },
    
    • [x] Conformance testing with 1.23 control plane and x86_64 vmware-k8s-1.23 nodes
    Sonobuoy has completed. Use `sonobuoy retrieve` to get results.
    Plugin: e2e
    Status: passed
    Total: 7044
    Passed: 346
    Failed: 0
    Skipped: 6698
    
    
    • [x] Confirm the EBS CSI driver still works.
    $ kubectl get pvc block-claim 
    NAME          STATUS   VOLUME                                     CAPACITY   ACCESS MODES   STORAGECLASS   AGE
    block-claim   Bound    pvc-6765b493-84b3-427e-9bd9-45b9dae0444b   4Gi        RWO            ebs-sc         5s
    
    • [x] Confirm the vSphere CSI driver still works
    • [x] Smoke test aws-k8s-1.23-nvidia variant with some GPU workloads
    $ kubectl get jobs -o wide
    NAME                COMPLETIONS   DURATION   AGE     CONTAINERS   IMAGES                                                          SELECTOR
    nvidia-smoke-test   1/1           113s       2m42s   gpu-tests    359404537045.dkr.ecr.us-west-2.amazonaws.com/gpu-tests:latest
    
    • [x] metal K8s 1.23 testing with 1.22 control plane
    $ ./baremetal-conformance-test.sh   --control-plane-ip    --kubeconfig ./cluster-kubeconfigs/br-test-122-eks-a-cluster.kubeconfig   --hardware-manifest br-eksa-122-nodes.csv   --os-image-url  bottlerocket-metal-k8s-1.23-x86_64.img.gz  --test-mode certified-conformance
    Generating hardware manifests for the selected machines...
    Getting necessary information for Bottlerocket nodes to join the test cluster...
    Creating K8s bootstrap token for node registration...
    Creating tinkerbell workflow template for provisioning Bottlerocket...
    Waiting for tinkerbell workflows to finish...
    Resetting eksa-node10's power...
    Resetting eksa-node15's power...
    Resetting eksa-node17's power...
    Waiting for launched Bottlerocket nodes to become 'Ready' in br-test-122 cluster...
    eksa-node10 eksa-node15 eksa-node17 are ready in br-test-122.
    Starting Sonobuoy Kubernetes conformance test! Full 'certified-conformance' tests may take up to 1.5 hours to finish
    ...
       PLUGIN     STATUS   RESULT   COUNT
          e2e   complete   passed       1
    
    Sonobuoy has completed. Use `sonobuoy retrieve` to get results.
    Plugin: e2e
    Status: passed
    Total: 6434
    Passed: 346
    Failed: 0
    Skipped: 6088
    Sonobuoy test results available at br-test-122-conformance-test-results/br-test-122-conformance-results.tar.gz
    
    • [x] Checked kubelet logs and system logs in general, and compared them with the aws-k8s-1.23 variant's kubelet logs:

    ... Others TBD

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by etungsten 19
  • ecs-agent: bump to 1.48.1

    Issue number:

    n/a

    Description of changes:

    Update ecs-agent to v1.48.1

    Testing done:

    @srgothi92 was able to run ECS tests against a build with these changes. The upgrade/downgrade test he ran failed on the downgrade portion, as we expected. The next release, which includes this change, will be a one-way-only upgrade: the underlying datastore is migrated and does not support a seamless rollback. Operators that want to roll back will need to take steps appropriate for their container instances and their resources.

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by jahkeup 19
  • pubsys: add `validate-repo`, `check-repo-expirations`, `refresh-repo` subcommands

    Issue number: Fixes https://github.com/bottlerocket-os/bottlerocket/issues/1117

    Description of changes:

    Author: Erikson Tung <[email protected]>
    Date:   Mon Sep 21 18:06:26 2020 -0700
    
        pubsys: add `validate-repo` for validating TUF repositories
        
        Adds a new pubsys subcommand `validate-repo` for validating a set of TUF
        repositories.
        
        Adds a new cargo make task `validate-repo` that invokes this new subcommand.
    
    
    Author: Erikson Tung <[email protected]>
    Date:   Wed Sep 23 14:52:07 2020 -0700
    
        pubsys: add `check-repo-expirations` to check metadata expirations
        
        Adds new pubsys subcommand `check-repo-expirations` that checks for
        repository metadata expirations within a specified timeframe
        
        Adds a new Makefile.toml task that invokes `check-repo-expirations`
    
    Author: Erikson Tung <[email protected]>
    Date:   Thu Sep 24 15:39:27 2020 -0700
    
        pubsys: add `refresh-repo` to refresh and resign TUF repositories
        
        Adds a new subcommand `refresh-repo` for refreshing and resigning
        non-root metadata files of TUF repositories.
        
        Add new Makefile.toml task `refresh-repo` for refreshing the expiration
        dates of TUF repositories' metadata files
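Conceptually, the expiration check compares each metadata file's `signed.expires` timestamp against a cutoff some window into the future. A rough stand-alone sketch of that idea — this is not pubsys code; the file name, naive field extraction, and 14-day window are illustrative, and GNU `date` is assumed:

```shell
# Sketch: does a TUF metadata file expire within the next N days?
# Not pubsys code; the extraction below is naive and GNU `date` is assumed.

expires_within() {
  # usage: expires_within <metadata-file> <days>; returns 0 if expiring within window
  local meta="$1" days="$2"
  local expires exp cutoff
  # TUF metadata carries its expiration in the "expires" field (RFC 3339).
  expires=$(grep -o '"expires"[^,}]*' "$meta" | head -n1 | grep -o '"[^"]*"$' | tr -d '"')
  exp=$(date -d "$expires" +%s)
  cutoff=$(( $(date +%s) + days * 86400 ))
  [ "$exp" -le "$cutoff" ]
}

# Demo with a fabricated metadata file carrying an already-past expiration.
printf '{"signed":{"expires":"2020-10-14T17:52:25Z"}}\n' > /tmp/ts.json
if expires_within /tmp/ts.json 14; then
  echo "timestamp expiring within 14 days"
fi
```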
    

    Testing done:

    Validating repositories

    A normal validation run via `cargo make validate-repo`, where the `2020-07-07/aws-ecs-1/aarch64` repo is successfully loaded and its targets are successfully retrieved.

    $ cargo make -e PUBLISH_REPO="2020-07-07" -e BUILDSYS_VARIANT="aws-ecs-1" -e BUILDSYS_ARCH="aarch64" validate-repo
    [cargo-make] INFO - cargo make 0.32.5
    [cargo-make] INFO - Build File: Makefile.toml
    [cargo-make] INFO - Task: validate-repo
    [cargo-make] INFO - Profile: development
    [cargo-make] INFO - Running Task: setup
    [cargo-make] INFO - Running Task: fetch-sources
    [cargo-make] INFO - Running Task: tuftool
    [cargo-make] INFO - Running Task: publish-setup-tools
    [cargo-make] INFO - Running Task: publish-setup-without-key
    00:42:54 [INFO] Found infra config at path: /home/etung/thar/Infra.toml
    [cargo-make] INFO - Running Task: publish-tools
    [cargo-make] INFO - Running Task: validate-repo
    00:43:47 [INFO] Using infra config from path: /home/etung/thar/Infra.toml
    00:43:48 [INFO] Loaded TUF repo: https://updates.bottlerocket.aws/2020-07-07/aws-ecs-1/aarch64
    00:43:48 [INFO] Downloading 100% of listed targets from https://updates.bottlerocket.aws/2020-07-07/aws-ecs-1/aarch64
    00:43:48 [INFO] Downloading target: bottlerocket-aws-ecs-1-aarch64-1.0.0-b0e2bc22-boot.ext4.lz4
    00:43:49 [INFO] Downloading target: bottlerocket-aws-ecs-1-aarch64-1.0.1-2a181156-root.verity.lz4
    00:43:49 [INFO] Downloading target: migrate_v0.4.1_add-version-lock-ignore-waves.lz4
    00:43:49 [INFO] Downloading target: migrate_v1.0.0_ecr-helper-admin.lz4
    00:43:49 [INFO] Downloading target: migrate_v1.0.0_ecr-helper-control.lz4
    00:43:49 [INFO] Downloading target: bottlerocket-aws-ecs-1-aarch64-1.0.0-b0e2bc22-root.ext4.lz4
    00:43:58 [INFO] Downloading target: migrate_v0.5.0_admin-container-v0-5-2.lz4
    00:43:58 [INFO] Downloading target: migrate_v0.5.0_control-container-v0-4-1.lz4
    00:43:59 [INFO] Downloading target: bottlerocket-aws-ecs-1-aarch64-1.0.1-2a181156-boot.ext4.lz4
    00:44:00 [INFO] Downloading target: manifest.json
    00:44:00 [INFO] Downloading target: bottlerocket-aws-ecs-1-aarch64-1.0.0-b0e2bc22-root.verity.lz4
    00:44:00 [INFO] Downloading target: migrate_v0.4.1_pivot-repo-2020-07-07.lz4
    00:44:00 [INFO] Downloading target: bottlerocket-aws-ecs-1-aarch64-1.0.1-2a181156-root.ext4.lz4
    00:44:09 [INFO] Downloading target: migrate_v0.5.0_add-cluster-domain.lz4
    00:44:09 [INFO] Downloading target: migrate_v0.3.2_admin-container-v0-5-0.lz4
    [cargo-make] INFO - Build Done in 84 seconds.
    

    A test validation run for the 2020-07-07 repositories, where I disabled internet access halfway through to exercise the error handling.

    $ cargo make -e PUBLISH_REPO="2020-07-07" -e BUILDSYS_VARIANT="aws-k8s-1.17" -e BUILDSYS_ARCH="x86_64" validate-repo
    [cargo-make] INFO - cargo make 0.32.7
    [cargo-make] INFO - Build File: Makefile.toml
    [cargo-make] INFO - Task: validate-repo
    [cargo-make] INFO - Profile: development
    [cargo-make] INFO - Running Task: setup
    [cargo-make] INFO - Running Task: fetch-sources
    [cargo-make] INFO - Running Task: tuftool
    [cargo-make] INFO - Running Task: publish-setup-tools
    [cargo-make] INFO - Running Task: publish-setup
    17:05:16 [INFO] Found infra config at path: /Infra.toml
    [cargo-make] INFO - Running Task: setup
    [cargo-make] INFO - Running Task: fetch-sources
    [cargo-make] INFO - Running Task: publish-tools
    [cargo-make] INFO - Running Task: validate-repo
    17:05:17 [INFO] Using infra config from path: /Infra.toml
    17:05:20 [INFO] Loaded TUF repo: https://updates.bottlerocket.aws/2020-07-07/aws-k8s-1.17/x86_64
    17:05:20 [INFO] Attempting to download 100% of listed targets from https://updates.bottlerocket.aws/2020-07-07/aws-k8s-1.17/x86_64 for validation purposes
    17:05:20 [INFO] Downloading target: bottlerocket-aws-k8s-1.17-x86_64-0.5.0-e0ddf1b-boot.ext4.lz4
    17:05:20 [INFO] Downloading target: migrate_v0.4.1_pivot-repo-2020-07-07.lz4
    17:05:21 [INFO] Downloading target: bottlerocket-aws-k8s-1.17-x86_64-0.5.0-e0ddf1b-root.ext4.lz4
    17:05:21 [INFO] Downloading target: bottlerocket-aws-k8s-1.17-x86_64-1.0.1-2a181156-root.verity.lz4
    17:05:22 [INFO] Downloading target: bottlerocket-aws-k8s-1.17-x86_64-1.0.1-2a181156-boot.ext4.lz4
    17:05:22 [INFO] Downloading target: migrate_v0.5.0_admin-container-v0-5-2.lz4
    17:05:23 [INFO] Downloading target: bottlerocket-aws-k8s-1.17-x86_64-1.0.0-b0e2bc22-root.verity.lz4
    17:05:23 [INFO] Downloading target: migrate_v0.4.1_add-version-lock-ignore-waves.lz4
    17:05:23 [INFO] Downloading target: migrate_v0.3.2_admin-container-v0-5-0.lz4
    17:05:24 [INFO] Downloading target: migrate_v1.0.0_ecr-helper-control.lz4
    17:05:24 [INFO] Downloading target: migrate_v0.5.0_control-container-v0-4-1.lz4
    Failed to validate repository: Failed to read target 'manifest.json' from repo: Failed to fetch https://updates.bottlerocket.aws/targets/004c373cb0899ed114a94692e245c4e45eeaa74155470de74b9a234c14920800.manifest.json: Could not fetch repo at 'https://updates.bottlerocket.aws/targets/004c373cb0899ed114a94692e245c4e45eeaa74155470de74b9a234c14920800.manifest.json': Failed to fetch 'https://updates.bottlerocket.aws/targets/004c373cb0899ed114a94692e245c4e45eeaa74155470de74b9a234c14920800.manifest.json' after 4 tries, final error: error sending request for url (https://updates.bottlerocket.aws/targets/004c373cb0899ed114a94692e245c4e45eeaa74155470de74b9a234c14920800.manifest.json): operation timed out
    [cargo-make] ERROR - Error while executing command, exit code: 1
    [cargo-make] WARN - Build Failed.
    
    
    

    Checking for repository non-root metadata expirations

    Successfully ran `check-repo-expirations` on the 2020-02-02-prefixed Bottlerocket TUF repositories.

    $ cargo make -e PUBLISH_REPO="2020-02-02" -e BUILDSYS_VARIANT="aws-k8s-1.17" -e BUILDSYS_ARCH="x86_64" check-repo-expirations
    [cargo-make] INFO - cargo make 0.32.5
    [cargo-make] INFO - Build File: Makefile.toml
    [cargo-make] INFO - Task: check-repo-expirations
    [cargo-make] INFO - Profile: development
    [cargo-make] INFO - Running Task: setup
    [cargo-make] INFO - Running Task: fetch-sources
    [cargo-make] INFO - Running Task: tuftool
    [cargo-make] INFO - Running Task: publish-setup-tools
    [cargo-make] INFO - Running Task: publish-setup-without-key
    21:05:45 [INFO] Found infra config at path: /home//Infra.toml
    [cargo-make] INFO - Running Task: publish-tools
    [cargo-make] INFO - Running Task: check-repo-expirations
    21:05:45 [INFO] Using infra config from path: /home/Infra.toml
    21:05:49 [INFO] Loaded TUF repo:	https://updates.bottlerocket.aws/2020-02-02/aws-k8s-1.17/x86_64
    21:05:49 [INFO] Root expiration:	2021-09-22 00:00:00 UTC
    21:05:49 [INFO] Snapshot expiration:	2020-10-21 17:52:25.534814597 UTC
    21:05:49 [INFO] Targets expiration:	2020-10-21 17:52:25.534815167 UTC
    21:05:49 [INFO] Timestamp expiration:	2020-10-14 17:52:25.534815678 UTC
    21:05:49 [INFO] Looking for metadata expirations happening from now to 2020-10-11 21:05:45.995390949 UTC
    [cargo-make] INFO - Build Done in 5 seconds.
    

    Specifying `--upcoming-expirations-in` with a 10-day window (`EXPIRING_WITHIN='10 days'`) causes pubsys to report the timestamp metadata file as expiring within that window, which is expected.

    $ cargo make -e PUBLISH_REPO="2020-02-02" -e BUILDSYS_VARIANT="aws-k8s-1.17" -e BUILDSYS_ARCH="x86_64" -e EXPIRING_WITHIN='10 days' check-repo-expirations
    [cargo-make] INFO - cargo make 0.32.5
    [cargo-make] INFO - Build File: Makefile.toml
    [cargo-make] INFO - Task: check-repo-expirations
    [cargo-make] INFO - Profile: development
    [cargo-make] INFO - Running Task: setup
    [cargo-make] INFO - Running Task: fetch-sources
    [cargo-make] INFO - Running Task: tuftool
    [cargo-make] INFO - Running Task: publish-setup-tools
    [cargo-make] INFO - Running Task: publish-setup-without-key
    21:07:35 [INFO] Found infra config at path: /home//Infra.toml
    [cargo-make] INFO - Running Task: publish-tools
    [cargo-make] INFO - Running Task: check-repo-expirations
    21:07:36 [INFO] Using infra config from path: /home//Infra.toml
    21:07:38 [INFO] Loaded TUF repo:	https://updates.bottlerocket.aws/2020-02-02/aws-k8s-1.17/x86_64
    21:07:38 [INFO] Root expiration:	2021-09-22 00:00:00 UTC
    21:07:38 [INFO] Snapshot expiration:	2020-10-21 17:52:25.534814597 UTC
    21:07:38 [INFO] Targets expiration:	2020-10-21 17:52:25.534815167 UTC
    21:07:38 [INFO] Timestamp expiration:	2020-10-14 17:52:25.534815678 UTC
    21:07:38 [INFO] Looking for metadata expirations happening from now to 2020-10-18 21:07:36.089668869 UTC
    21:07:38 [WARN] Repo 'https://updates.bottlerocket.aws/2020-02-02/aws-k8s-1.17/x86_64': 'timestamp' expiring between now and 2020-10-18 21:07:36.089668869 UTC on 2020-10-14 17:52:25.534815678 UTC
    Found expiring/expired metadata in 'https://updates.bottlerocket.aws/2020-02-02/aws-k8s-1.17/x86_64'
    [cargo-make] ERROR - Error while executing command, exit code: 1
    [cargo-make] WARN - Build Failed.
    

    Refreshing and resigning repository non-root metadata files

    Successfully refreshed and resigned metadata for a test TUF repository using a given expiration policy.

    $ cargo make -e PUBLISH_REPO="etung" -e BUILDSYS_VARIANT="aws-k8s-1.17" -e BUILDSYS_ARCH="aarch64" -e UNSAFE_REFRESH=true refresh-repo
    [cargo-make] INFO - cargo make 0.32.5
    [cargo-make] INFO - Build File: Makefile.toml
    [cargo-make] INFO - Task: refresh-repo
    [cargo-make] INFO - Profile: development
    [cargo-make] INFO - Running Task: setup
    [cargo-make] INFO - Running Task: fetch-sources
    [cargo-make] INFO - Running Task: tuftool
    [cargo-make] INFO - Running Task: publish-setup-tools
    [cargo-make] INFO - Running Task: publish-setup
    20:31:50 [INFO] Found infra config at path: /home//Infra.toml
    [cargo-make] INFO - Running Task: publish-tools
    [cargo-make] INFO - Running Task: refresh-repo
    20:32:42 [INFO] Using infra config from path: /home/Infra.toml
    20:32:42 [INFO] Using repo expiration policy from path: /home/tools/pubsys/policies/repo-expiration/2w-2w-1w.toml
    20:32:45 [INFO] Loaded TUF repo: https://d31e403t2udbbh.cloudfront.net/aws-k8s-1.17/aarch64
    20:32:45 [INFO] Setting non-root metadata expiration times:
    	snapshot:  2020-10-22 20:32:45.590842865 UTC
    	targets:   2020-10-22 20:32:45.590842865 UTC
    	timestamp: 2020-10-15 20:32:45.590842865 UTC
    20:32:45 [INFO] Writing repo metadata to: /home/build/repos/etung/bottlerocket-1.0.2-5119490c/aws-k8s-1.17/aarch64
    [cargo-make] INFO - Build Done in 60 seconds.
    

    The TUF repository passes validation checks after uploading the refreshed and resigned metadata files.

    $ cargo make -e PUBLISH_REPO="etung" -e BUILDSYS_VARIANT="aws-k8s-1.17" -e BUILDSYS_ARCH="x86_64" validate-repo
    [cargo-make] INFO - cargo make 0.32.5
    [cargo-make] INFO - Build File: Makefile.toml
    [cargo-make] INFO - Task: validate-repo
    [cargo-make] INFO - Profile: development
    [cargo-make] INFO - Running Task: setup
    [cargo-make] INFO - Running Task: fetch-sources
    [cargo-make] INFO - Running Task: tuftool
    [cargo-make] INFO - Running Task: publish-setup-tools
    [cargo-make] INFO - Running Task: publish-setup-without-key
    16:57:33 [INFO] Found infra config at path: /home/etung/thar/Infra.toml
    [cargo-make] INFO - Running Task: publish-tools
    [cargo-make] INFO - Running Task: validate-repo
    16:57:34 [INFO] Using infra config from path: /home/etung/thar/Infra.toml
    16:57:34 [INFO] Loaded TUF repo: https://asdasdasdasdasd.cloudfront.net/aws-k8s-1.17/x86_64
    16:57:34 [INFO] Downloading 100% of listed targets from https://asdasdasdasd.cloudfront.net/aws-k8s-1.17/x86_64
    16:57:34 [INFO] Downloading target: 2.txt
    16:57:34 [INFO] Downloading target: 1.txt
    [cargo-make] INFO - Build Done in 2 seconds.
    
    

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by etungsten 19
  • Console debugging experience when nothing works is not ideal

    When trying to view the instance system/console log in the EC2 console:

    • [x] ANSI color sequences don't render, so it's very hard to read
    • [ ] systemd truncates long lines, but we'd rather have the full output
    • [x] Services in the boot path should probably have their stdout/stderr make it to the console as well
    opened by iliana 19
  • expose efi variables to privileged host containers

    Issue number:

    #2501

    Description of changes: Build systemd with the "efi" option set, so that systemd will mount efivarfs on boot under /sys/firmware/efi/efivars.

    For privileged host containers, mount /sys/firmware with the "rbind" option so that child mounts such as efivarfs are also propagated.
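As an illustration, an rbind mount of /sys/firmware in an OCI runtime spec looks roughly like the fragment below (a hypothetical mount entry, not the actual host-container configuration):

```json
{
  "destination": "/sys/firmware",
  "source": "/sys/firmware",
  "type": "bind",
  "options": ["rbind"]
}
```

With plain "bind", only /sys/firmware itself would be mounted; "rbind" recursively carries along child mounts such as efivarfs.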

    Testing done: Confirmed that /sys/firmware/efi/efivars shows up in the admin container, and that it can be read from and written to by commands like mokutil.

    [bottlerocket@admin]$ mokutil --sb-state
    SecureBoot disabled
    

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by bcressey 0
  • avoid running the cache workflow on forks

    Platform I'm building on: GitHub

    What I expected to happen: I pushed the latest commits to my Bottlerocket fork and did not expect any actions to run.

    What actually happened: I received an email about a failed run of the cache workflow.

    How to reproduce the problem: I expect this happens for every fork of Bottlerocket. We should add a conditional so that the cache workflow only runs for the main repo.
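One way to express that conditional in a GitHub Actions workflow (a sketch; the workflow, job, and step names here are illustrative):

```yaml
# Hypothetical guard: skip the job unless running in the upstream repo.
name: cache
on: push
jobs:
  cache:
    if: github.repository == 'bottlerocket-os/bottlerocket'
    runs-on: ubuntu-latest
    steps:
      - run: echo "building cache"
```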

    type/bug status/research area/tools 
    opened by bcressey 0
  • Move issue templates to issue forms

    Issue number: n/a

    Closes # n/a

    Description of changes:

    This PR moves Bottlerocket from issue templates to issue forms. This adds form controls to issues as well as some basic validation.

    Additionally, this adds two new types of issues:

    • General
    • Documentation

    Testing done:

    Due to limitations in the GitHub interface, testing is limited: the issue forms pass GitHub's validation, and otherwise testing is visual only. You can see the end result by looking at the develop branch of my fork.

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

    opened by stockholmux 0
  • build(deps): bump tokio from 1.23.0 to 1.23.1 in /tools

    Bumps tokio from 1.23.0 to 1.23.1.

    Release notes

    Sourced from tokio's releases.

    Tokio v1.23.1

    This release forward ports changes from 1.18.4.

    Fixed

    • net: fix Windows named pipe server builder to maintain option when toggling pipe mode (#5336).

    #5336: tokio-rs/tokio#5336

    Commits
    • 1a997ff chore: prepare Tokio v1.23.1 release
    • a8fe333 Merge branch 'tokio-1.20.x' into tokio-1.23.x
    • ba81945 chore: prepare Tokio 1.20.3 release
    • 763bdc9 ci: run WASI tasks using latest Rust
    • 9f98535 Merge remote-tracking branch 'origin/tokio-1.18.x' into fix-named-pipes-1.20
    • 9241c3e chore: prepare Tokio v1.18.4 release
    • 699573d net: fix named pipes server configuration builder
    • See full diff in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    • @dependabot use these labels will set the current labels as the default for future PRs for this repo and language
    • @dependabot use these reviewers will set the current reviewers as the default for future PRs for this repo and language
    • @dependabot use these assignees will set the current assignees as the default for future PRs for this repo and language
    • @dependabot use this milestone will set the current milestone as the default for future PRs for this repo and language

    You can disable automated security fix PRs for this repo from the Security Alerts page.

    dependencies rust 
    opened by dependabot[bot] 0
  • build(deps): bump tokio from 1.14.1 to 1.18.4 in /sources

    Bumps tokio from 1.14.1 to 1.18.4.

    Release notes

    Sourced from tokio's releases.

    Tokio v1.18.3

    1.18.3 (September 27, 2022)

    This release removes the dependency on the once_cell crate to restore the MSRV of the 1.18.x LTS release. (#5048)

    #5048: tokio-rs/tokio#5048

    Tokio v1.18.2

    1.18.2 (May 5, 2022)

    Add missing features for the winapi dependency. (#4663)

    #4663: tokio-rs/tokio#4663

    Tokio v1.18.1

    1.18.1 (May 2, 2022)

    The 1.18.0 release broke the build for targets without 64-bit atomics when building with tokio_unstable. This release fixes that. (#4649)

    #4649: tokio-rs/tokio#4649

    Tokio v1.18.0

    1.18.0 (April 27, 2022)

    This release adds a number of new APIs in tokio::net, tokio::signal, and tokio::sync. In addition, it adds new unstable APIs to tokio::task (Ids for uniquely identifying a task, and AbortHandle for remotely cancelling a task), as well as a number of bugfixes.

    Fixed

    • blocking: add missing #[track_caller] for spawn_blocking (#4616)
    • macros: fix select macro to process 64 branches (#4519)
    • net: fix try_io methods not calling Mio's try_io internally (#4582)
    • runtime: recover when OS fails to spawn a new thread (#4485)

    Added

    • net: add UdpSocket::peer_addr (#4611)
    • net: add try_read_buf method for named pipes (#4626)
    • signal: add SignalKind Hash/Eq impls and c_int conversion (#4540)
    • signal: add support for signals up to SIGRTMAX (#4555)
    • sync: add watch::Sender::send_modify method (#4310)
    • sync: add broadcast::Receiver::len method (#4542)
    • sync: add watch::Receiver::same_channel method (#4581)
    • sync: implement Clone for RecvError types (#4560)

    Changed

    ... (truncated)

  • variants: move wicked out of release

    Issue number: 2449

    Closes #

    Description of changes: We need to move wicked out of release so that we can toggle which network backend a variant uses while working on networkd enablement. This makes it possible to remove wicked from one variant without affecting the others, or to have a variant dedicated to networkd testing. This commit also makes the variant Cargo.toml ordering consistent across all variants.

    The diff is noisy because files were reordered. The only functional changes should be the addition of wicked to the included packages and dependencies; the rest is commenting and sorting to make the files consistent.

    Testing done: Built all the variants. This should not actually affect the resulting images at all.

    Terms of contribution:

    By submitting this pull request, I agree that this contribution is dual-licensed under the terms of both the Apache License, version 2.0, and the MIT license.

Releases (v1.11.1)
  • v1.11.1(Nov 30, 2022)

    Security Fixes

    • Update NVIDIA driver for 5.10 and 5.15 to include recent security fixes (74d2c5c13ab0, 64f3967373a5)
    • Apply patch to systemd for CVE-2022-3821 (#2611)
  • v1.11.0(Nov 17, 2022)

    OS Changes

    • Prevent a panic in early-boot-config when there is no IMDS region (#2493)
    • Update grub to 2.06-42 (#2503)
    • Bring back wicked support for matching interfaces via hardware address (#2519)
    • Allow bootstrap containers to manage swap (#2537)
    • Add systemd-analyze commands to troubleshooting log collection tool (#2550)
    • Allow bootstrap containers to manage network configuration (#2558)
    • Serialize bootconfig values correctly when the value is empty (#2565)
    • Update zlib, libexpat, libdbus, docker-cli (#2583)
    • Update host containers (#2574)
    • Unmask /sys/firmware from host containers (#2573)

    Orchestrator Changes

    ECS

    • Add additional ECS API configurations (#2527)
      • ECS_CONTAINER_STOP_TIMEOUT
      • ECS_ENGINE_TASK_CLEANUP_WAIT_DURATION
      • ECS_TASK_METADATA_RPS_LIMIT
      • ECS_RESERVED_MEMORY

    Kubernetes

    • Add a timeout when calling EKS for configuration values (#2566)
    • Enable IAM Roles Anywhere with the k8s ecr-credential-provider plugin (#2377, #2553)
    • Kubernetes EKS-D updates

    Platform Changes

    AWS

    • Add driver support for AWS variants in hybrid environments (#2554)

    Build Changes

    • Add support for publishing to AWS organizations (#2484)
    • Remove unnecessary dependencies when building grub (#2495)
    • Switch to the latest Dockerfile frontend for builds (#2496)
    • Prepare foundations for Secure Boot and image re-signing (#2505)
    • Fix EFI file system to fit partition size (#2528)
    • Add ShellCheck to check-lints for build scripts (#2532)
    • Update the SDK to v0.28.0 (#2543)
    • Use rustls-native-certs instead of webpki-roots (#2551)
    • Handle absolute paths for output directory in kernel build script (#2563)

    Documentation Changes

    • Add a Roadmap markdown file (#2549)
  • v1.10.1(Oct 19, 2022)

    OS Changes

    • Support container runtime settings: enable-unprivileged-icmp, enable-unprivileged-ports, max-concurrent-downloads, max-container-log-line-size (#2494)
    • Update EKS-D to 1.22-11 (#2490)
    • Update EKS-D to 1.23-6 (#2488)
  • v1.10.0(Oct 13, 2022)

    OS Changes

    • Add optional settings to reboot into new kernel command line parameters (#2375)
    • Support for static IP addressing (#2204, #2330, #2445)
    • Add support for NVIDIA driver version 515 (#2455)
    • Set mode for tmpfs mounts (#2473)
    • Increase inotify default limits (#2335)
    • Align vm.max_map_count with the EKS Optimized AMI (#2344)
    • Add support for configuring DNS settings (#2353)
    • Migrate netdog from serde_xml_rs to quick-xml (#2311)
    • Support versioning for net.toml (#2281)
    • Update admin and control container (#2471, #2472)

    Orchestrator Changes

    ECS

    • Add cargo make tasks for testing ECS variants (#2348)

    Kubernetes

    • Add support for Kubernetes 1.24 variants (#2437)
    • Remove Kubernetes aws-k8s-1.19 variants (#2316)
    • Increase the kube-api-server QPS from 5/10 to 10/20 (#2436, thanks @tzneal)
    • Update eni-max-pods with new instance types (#2416)
    • Add setting to change kubelet's log level (#2460, #2470)
    • Add cargo make tasks to perform migration testing for Kubernetes variants in AWS (#2273)

    Platform Changes

    AWS

    • Disable drivers for USB-attached network interfaces (#2328)

    Metal

    • Add driver support for Solarflare, Pensando, Myricom, Huawei, Emulex, Chelsio, Broadcom, AMD and Intel 10G+ network cards (#2379)

    Build Changes

    • Extend external-files to vendor go modules (#2378, #2403, #2430)
    • Make net_config unit tests reusable across versions (#2385)
    • Add diff-kernel-config to identify kernel config changes (#2368)
    • Extended support for variants in buildsys (#2339)
    • Clarify crossbeam license (#2447)
    • Honor BUILDSYS_ARCH and BUILDSYS_VARIANT env variables when set (#2425)
    • Use architecture specific json payloads in unit tests (#2367, #2363)
    • Add unified check target in Makefile.toml for review readiness (#2384)
    • Update Go dependencies of first-party go projects (#2424, #2440, #2450, #2452, #2456)
    • Update Rust dependencies (#2458, #2476)
    • Update third-party packages (#2397, #2398, #2464, #2465, thanks @kschumy)
    • Update Bottlerocket SDK to 0.27.0 (#2428)
    • Migrate pubsys and infrasys to the AWS SDK for Rust (#2414, #2415, #2454)
    • Update testsys dependencies (#2392)
    • Fix hotdog's spec URL to the correct upstream link (#2326)
    • Fix clippy warnings and enable lints on pull requests (#2337, #2346, #2443)
    • Format issue field in PR template (#2314)

    Documentation Changes

    • Update checksum for new root.json (#2405)
    • Mention that boot settings are available in Kubernetes 1.23 variants (#2358)
    • Mention the need for AWS credentials in BUILDING.md and PUBLISHING-AWS.md (#2334)
    • Add China to supported regions lists (#2315)
    • Add community section to README.md (#2305, #2383)
    • Standardize userdata.toml as the filename used in different docs (#2446)
    • Remove commit from image name in PROVISIONING-METAL.md (#2312)
    • Add note to CONTRIBUTING.md that outlines filenames' casing (#2306)
    • Fix typos in Makefile.toml, QUICKSTART-ECS.md, QUICKSTART-EKS.md, netdog and prairiedog (#2318, thanks @kianmeng)
    • Fix casing for GitHub and VMware in CHANGELOG.md (#2329)
    • Fix typo in test setup command (#2477)
    • Fix TESTING.md link typo (#2438)
    • Fix positional fetch-license argument (#2457)
  • v1.9.2(Aug 31, 2022)

  • v1.9.1(Aug 19, 2022)

  • v1.9.0(Jul 29, 2022)

    OS Changes

    • SELinux policy now suppresses audit for tmpfs relabels (#2222)
    • Restrict permissions for /boot and System.map (#2223)
    • Remove unused crates growpart and servicedog (#2238)
    • New mount in host containers for system logs (#2295)
    • Apply strict mount options and enforce execution rules (#2239)
    • Switch to a more commonly used syntax for disabling kernel config settings (#2290)
    • Respect proxy settings when running setting generators (#2227)
    • Add NET_CAP_ADMIN to bootstrap containers (#2266)
    • Reduce log output for DHCP services (#2260)
    • Fix invalid kernel config options (#2269)
    • Improve support for container storage mounts (#2240)
    • Disable uncommon filesystems and network protocols (#2255)
    • Add support for blocking kernel modules (#2274)
    • Fix ntp service restart when settings change (#2270)
    • Add kernel 5.15 sources (#2226)
    • Defer squashfs mounts to later in the boot process (#2276)
    • Improve boot speed and rootfs size (#2296)
    • Add "quiet" kernel parameter for some variants (#2277)

    Orchestrator Changes

    Kubernetes

    • Make new instance types available (#2221, thanks @cablespaghetti)
    • Update Kubernetes versions (#2230, #2232, #2262, #2263, thanks @kschumy)
    • Add kubelet image GC threshold settings (#2219)

    ECS

    • Add iptables rules for ECS introspection server (#2267)

    Platform Changes

    AWS

    • Add support for AWS China regions (#2224, #2242, #2247, #2285)
    • Migrate to using aws-sdk-rust for first-party OS Rust packages (#2300)

    VMware

    • Remove console=ttyS0 from kernel params (#2248)

    Metal

    • Enable Mellanox modules in 5.10 kernel (#2241)
    • Add bnxt module for Broadcom 10/25Gb network adapters in 5.10 kernel (#2243)
    • Split out baremetal specific config options (#2264)
    • Add driver support for Cisco UCS platforms (#2271)
    • Only build baremetal variant specific drivers for baremetal variants (#2279)
    • Enable the metal-dev build for the ARM architecture (#2272)

    Build Changes

    • Add Makefile targets to create and validate Boot Configuration (#2189)
    • Create symlinks to images with friendly names (#2215)
    • Add start-local-vm script (#2194)
    • Add the testsys CLI and new cargo make tasks for testing aws-k8s variants (#2165)
    • Update Rust and Go dependencies (#2303, #2299)
    • Update third-party packages (#2309)

    Documentation Changes

    • Add NVIDIA ECS variant to README (#2244)
    • Add documentation for metal variants (#2205)
    • Add missing step in building packages guide (#2259)
    • Add quickstart for running Bottlerocket in QEMU/KVM VMs (#2280)
    • Address lints in README markdown caught by markdownlint (#2283)
  • v1.8.0(Jun 10, 2022)

    OS Changes

    General

    • Update admin and control containers (#2191)
    • Update to containerd 1.6.x (#2158)
    • Restart container runtimes when certificates store changes (#2076)
    • Add support for providing kernel parameters via Boot Configuration (#1980)
    • Restart long-running systemd services on exit (#2162)
    • Ignore zero blocks on dm-verity root (#2169)
    • Add support for static DNS mappings in /etc/hosts (#2129)
    • Enable network configuration generation via netdog (#2066)
    • Add support for non-eth0 default interfaces (#2144)
    • Update to IMDS schema 2021-07-15 (#2190)
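
    The static /etc/hosts mappings added above (#2129) are driven by settings rather than by editing the file directly. A minimal user-data sketch, assuming the settings.network.hosts list format; the address and hostnames are placeholders:

    ```toml
    # Hypothetical example: map one address to two hostnames.
    # Address and names are placeholders, not defaults.
    [settings.network]
    hosts = [
        ["10.0.0.1", ["test.example.com", "test2.example.com"]],
    ]
    ```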

    Kubernetes

    • Add support for Kubernetes 1.23 variants (#2188)
    • Improve Kubernetes pod start times by unsetting configMapAndSecretChangeDetectionStrategy in kubelet config (#2166)
    • Add new setting for configuring kubelet's provider-id configuration (#2192)
    • Add new setting for configuring kubelet's podPidsLimit configuration (#2138)
    • Allow a list of IP addresses in settings.kubernetes.cluster-dns-ip (#2176)
    • Set the default for settings.kubernetes.cloud-provider on metal variants to an empty string (#2188)
    • Add c7g instance data for max pods calculation in AWS variants (#2107, thanks, @lizthegrey!)
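
    Since settings.kubernetes.cluster-dns-ip now accepts a list (#2176), user data can pass either a single address or several. A sketch with placeholder addresses:

    ```toml
    # Placeholder cluster DNS addresses, not real defaults.
    [settings.kubernetes]
    cluster-dns-ip = ["10.0.0.10", "10.0.0.11"]
    ```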

    Hardware

    • Build smartpqi driver for Microchip Smart Storage devices into 5.10 kernel (#2184)
    • Add support for Broadcom ethernet cards in 5.10 kernel (#2143)
    • Add support for MegaRAID SAS in 5.10 kernel (#2133)

    Documentation Changes

    • Standardize README generation in buildsys (#2134)
    • Clarify migration README (#2141)
    • Fix typos in BUILDING.md and QUICKSTART-VMWARE.md (#2159, thanks, @ryanrussell!)
    • Add additional documentation for using GPUs with Kubernetes variants (#2078)
    • Document examples for using enter-admin-container (#2028)
  • v1.7.2(Apr 25, 2022)

    Security Fixes

    • Update kernel-5.4 to patch CVE-2022-1015, CVE-2022-1016, CVE-2022-25636, CVE-2022-26490, CVE-2022-27666, CVE-2022-28356 (a3b4674f7108)
    • Update kernel-5.10 to patch CVE-2022-1015, CVE-2022-1016, CVE-2022-25636, CVE-2022-1048, CVE-2022-26490, CVE-2022-27666, CVE-2022-28356 (37095415bab6)

    OS Changes

    • Update eni-max-pods with new instance types (#2079)
    • Add support for AWS region ap-southeast-3: Jakarta (#2080)
  • v1.7.1(Apr 19, 2022)

  • v1.7.0(Mar 30, 2022)

    With this release, an inventory of software installed in Bottlerocket will now be reported to SSM if the control container is in use and inventorying has been enabled.

    OS Changes

    • Generate host software inventory and make it available to host containers (#1996)
    • Update admin and control containers (#2014)

    Documentation Changes

    • Fix tuftool download instruction in VMware Quickstart (#1994)
    • Explain data partition extension (#2013)
  • v1.6.2(Mar 9, 2022)

    With this release, the vmware-k8s variants have graduated from preview status and are now generally available. :tada:

    OS Changes

    • Add support for Kubernetes 1.22 variants (#1962)
    • Add settings support for registry credentials (#1955)
    • Add support for AWS CloudFormation signaling (#1728, thanks, @mello7tre!)
    • Add TCMU support to the kernel (#1953, thanks, @cvlc!)
    • Fix issue with closing frame construction in apiserver (#1948)
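
    As a sketch of the registry credentials support above (#1955): credentials can be expressed in user data as an array of tables, one per registry. The registry name and credentials here are placeholders, and the field names are illustrative:

    ```toml
    # Hypothetical private registry credentials (placeholders).
    [[settings.container-registry.credentials]]
    registry = "private-registry.example.com"
    username = "example-user"
    password = "example-password"
    ```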

    Build Changes

    • Fix dead code warning during build in netdog (#1949)

    Documentation Changes

    • Correct variable name in bootstrap-containers/README.md (#1959, thanks, @dangen-effy!)
    • Add art to the console (#1970)
  • v1.6.1(Mar 2, 2022)

  • v1.6.0(Feb 8, 2022)

    Deprecation Notice

    The Kubernetes 1.18 variant, aws-k8s-1.18, will lose support in March 2022. Kubernetes 1.18 is no longer receiving support upstream. We recommend replacing aws-k8s-1.18 nodes with a later variant, preferably aws-k8s-1.21 if your cluster supports it. See this issue for more details.

    Security Fixes

    • Apply patch to the kernel for CVE-2022-0492 (#1943)

    OS Changes

    • Add aws-k8s-1.21-nvidia variant with Nvidia driver support (#1859, #1860, #1861, #1862, #1900, #1912, #1915, #1916, #1928)
    • Add metal-k8s-1.21 variant with support for running on bare metal (#1904)
    • Update host containers to the latest version (#1939)
    • Add driverdog, a configuration-driven utility for linking kernel modules at runtime (#1867)
    • Kubernetes: Fix a potential inconsistency with IPv6 node-ip comparisons (#1932)
    • Allow setting multiple Kubernetes node taints with the same key (#1906)
    • Fix a bug which would prevent Bottlerocket from booting when setting container-registry to an empty table (#1910)
    • Add /etc/bottlerocket-release to host containers (#1883)
    • Send grub output to the local console on BIOS systems (#1894)
    • Fix minor issues with systemd units (#1889)

    Build Changes

    • Update third-party packages (#1936)
    • Update Rust dependencies (#1940)
    • Update Go dependencies of host-ctr (#1938)
    • Add the ability to fetch licenses at build time (#1901)
    • Pin tuftool to a specific version (#1940)

    Documentation Changes

    • Add a no-proxy setting example to the README (#1765, thanks @mrajashree!)
    • Document variant image-layout options in the README (#1896)
  • v1.5.3(Jan 25, 2022)

    Security Fixes

    • Update Bottlerocket SDK to 0.25.1 for Rust 1.58.1 (#1918)
    • Update kernel-5.4 and kernel-5.10 to include recent security fixes (#1921)
    • Migrate host-container to the latest version for vmware variants (#1898)

    OS Changes

    • Fix an issue which could impair nodes in Kubernetes 1.21 IPv6 clusters (#1925)
  • v1.5.2(Jan 5, 2022)

  • v1.5.1(Dec 24, 2021)

  • v1.5.0(Dec 18, 2021)

    Security Enhancements

    • Add the ability to hotpatch log4j for CVE-2021-44228 in running containers (#1872, #1871, #1869)

    OS Changes

    • Enable configuration for OCI hooks in the container lifecycle (#1868)
    • Retry all failed requests to IMDS (#1841)
    • Enable node feature discovery for Kubernetes device plugins (#1863)
    • Add apiclient get subcommand for simple API retrieval (#1836)
    • Add support for CPU microcode updates (#1827)
    • Consistently support API prefix queries (#1835)

    Build Changes

    • Add support for custom image sizes (#1826)
    • Add support for unifying the OS and data partitions on a single disk (#1870)

    Documentation Changes

    • Fix typo in the README (#1847, thanks @PascalBourdier!)
  • v1.4.2(Dec 3, 2021)

  • v1.4.1(Nov 18, 2021)

  • v1.4.0(Nov 12, 2021)

    OS Changes

    • Add 'apiclient exec' for running commands in host containers (#1802, #1790)
    • Improve boot performance (#1809)
    • Add support for wildcard container registry mirrors (#1791, #1818)
    • Wait up to 300s for a DHCP lease at boot (#1800)
    • Retry if fetching the IMDS session token fails (#1801)
    • Add ECR account IDs for pulling host containers in GovCloud (#1793)
    • Filter sensitive API settings from logdog dump (#1777)
    • Fix kubelet standalone mode (#1783)
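
    With wildcard support (#1791, #1818), a single mirror entry can cover all registries. A hedged user-data sketch; the endpoint URL is a placeholder:

    ```toml
    # "*" matches any registry; the endpoint is a placeholder mirror.
    [[settings.container-registry.mirrors]]
    registry = "*"
    endpoint = ["https://mirror.example.com"]
    ```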

    Build Changes

    • Remove aws-k8s-1.17 variant (#1807)
    • Update Bottlerocket SDK to 0.23 (#1779)
    • Update third-party packages (#1816)
    • Update Rust dependencies (#1810)
    • Update Go dependencies of host-ctr (#1775, #1774)
    • Prevent spurious rebuilds of the model package (#1808)
    • Add disk image files to TUF repo (#1787)
    • Vendor wicked service units (#1798)
    • Add CI check for Rust code formatting (#1782)
    • Allow overriding the AMI data file suffix (#1784)

    Documentation Changes

    • Update cargo-make commands to work with newest cargo-make (#1797)
  • v1.3.0(Oct 6, 2021)

    Deprecation Notice

    The Kubernetes 1.17 variant, aws-k8s-1.17, will lose support in November 2021. Kubernetes 1.17 is no longer receiving support upstream. We recommend replacing aws-k8s-1.17 nodes with a later variant, preferably aws-k8s-1.21 if your cluster supports it. See this issue for more details.

    OS Changes

    • Add MCS constraints to the SELinux policy (#1733)
    • Support IPv6 in kubelet and pluto (#1710)
    • Add region flag to aws-iam-authenticator command (#1762)
    • Restart modified host containers (#1722)
    • Add more detail to /etc/os-release (#1749)
    • Add an entry to /etc/hosts for the current hostname (#1713, #1746)
    • Update default control container to v0.5.2 (#1730)
    • Fix various SELinux policy issues (#1729)
    • Update eni-max-pods with new instance types (#1724, thanks @samjo-nyang!)
    • Add cilium device filters to open-vm-tools (#1718)
    • Implement hybrid boot support for x86_64 (#1701)
    • Include /var/log/kdump in logdog tarballs (#1695)
    • Use runtime.slice and system.slice cgroup settings in k8s variants (#1684, thanks @cyrus-mc!)

    Build Changes

    • Update third-party packages (#1701, #1716, #1732, #1755, #1763, #1767)
    • Update Rust dependencies (#1707, #1750, #1751)
    • Add wave definition for slow deployment (#1734)
    • Add 'infrasys' for creating TUF infra in AWS (#1723)
    • Make OVF file first in the OVA bundle (#1719)
    • Raise pubsys messages to 'warn' if AMI exists or repo doesn't (#1708)
    • Add constants crate (#1709)
    • Add release URLs to package definitions (#1748)
    • Add *.src.rpm to packages/.gitignore (#1768)
    • Archive old migrations (#1699)

    Documentation Changes

    • Mention static pods in the security guidance around API access (#1766)
    • Fix link to issue labels (#1764, thanks @andrewhsu!)
    • Fix broken link for TLS bootstrapping (#1758)
    • Update hash for v3 root.json (#1757)
    • Update example version to v1.2.0 in QUICKSTART-VMWARE (#1741, thanks @yuvalk!)
    • Clarify default kernel lockdown settings per variant (#1704)
  • v1.2.1(Sep 17, 2021)

  • v1.2.0(Aug 6, 2021)

    OS Changes

    • Add settings for kubelet topologyManagerPolicy and topologyManagerScope (#1659)
    • Add support for container image registry mirrors (#1629)
    • Add support for custom CA certificates (#1654)
    • Add a setting for configuring hostname (#1664, #1680, #1693)
    • Avoid wildcard for applying rp_filter to interfaces (#1677)
    • Update default admin container to v0.7.2 (#1685)
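
    The hostname setting above (#1664) and the custom CA certificate support (#1654) are both plain settings. A sketch with placeholder values; the PKI bundle layout shown here is illustrative:

    ```toml
    [settings.network]
    hostname = "my-node"  # placeholder hostname

    # Hypothetical trust bundle: data is base64-encoded PEM (truncated placeholder).
    [settings.pki.example-bundle]
    data = "LS0tLS1CRUdJTi..."
    trusted = true
    ```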

    Build Changes

    • Add support for zstd compressed kernel (#1668, #1689)
    • Add support for uploading OVAs to VMware (#1622)
    • Update default built variant to aws-k8s-1.21 (#1686)
    • Remove aws-k8s-1.16 variant (#1658)
    • Move migrations from v1.1.5 to v1.2.0 (#1682)
    • Update third-party packages (#1676)
    • Update host-ctr dependencies (#1669)
    • Update Rust dependencies (#1655, #1683, #1687)

    Documentation Changes

    • Fix typo in README (#1652, thanks @faultymonk!)
  • v1.1.4(Jul 23, 2021)

  • v1.1.3(Jul 12, 2021)

    Note: in the Bottlerocket v1.0.8 release, for the aws-k8s-1.20 and aws-k8s-1.21 variants, we set the default Kubernetes CPU manager policy to "static". We heard from several users that this breaks the Fluent Bit log processor. In Bottlerocket v1.1.3, we've changed the default back to "none" and added a setting so you can opt back into the "static" policy if desired. To do so, set settings.kubernetes.cpu-manager-policy to "static"; in user data, for example, pass the following:

    [settings.kubernetes]
    cpu-manager-policy = "static"
    

    OS Changes

    • Fix parsing of lists of values in domain name search field of DHCP option sets (#1646, thanks @hypnoce!)
    • Add setting for configuring Kubernetes CPU manager policy and reconcile policy (#1638)

    Build Changes

    • Update SDK to 0.22.0 (#1640)
    • Store build artifacts per architecture (#1630)

    Documentation Changes

    • Update references to the ECS variant for GA release (#1637)
  • v1.1.2(Jun 25, 2021)

    With this release, the aws-ecs-1 variant has graduated from preview status and is now generally available. It's been updated to include Docker 20.10. :tada:

    OS Changes

    • Add aws-k8s-1.21 variant with Kubernetes 1.21 support (#1612)
    • Add settings for configuring kubelet containerLogMaxFiles and containerLogMaxSize (#1589) (Thanks, @samjo-nyang!)
    • Add settings for configuring kubelet systemReserved (#1606)
    • Add kdump support, enabled by default in VMware variants (#1596)
    • In host containers, allow mount propagations from privileged containers (#1601)
    • Mark ipv6 lease as optional for eth0 (#1602)
    • Add recommended device filters to open-vm-tools (#1603)
    • In host container definitions, default "enabled" and "superpowered" to false (#1580)
    • Allow pubsys refresh-repo to use default key path (#1575)
    • Update default host containers (#1609)

    Build Changes

    • Add grep package to all variants (#1562)
    • Update Rust dependencies (#1623, #1574)
    • Update third-party packages (#1619, #1616, #1625)
    • In GitHub Actions, pin rust toolchain to match version in SDK (#1621)
    • Add imdsclient library for querying IMDS (#1372, #1598, #1610)
    • Remove reqwest proxy workaround in metricdog and updog (#1592)
    • Simplify conditional compilation in early-boot-config (#1576)
    • Only build shibaken for aws variants (#1591)
    • Silence tokio mut warning in thar-be-settings (#1593)
    • Refactor package and variant dependencies (#1549)
    • Add derive attributes at start of list in model-derive (#1572)
    • Limit threads during pubsys validate-repo (#1564)

    Documentation Changes

    • Document the deprecation of the aws-k8s-1.16 variant (#1600)
    • Update README for VMware and add a QUICKSTART-VMWARE (#1559)
    • Add ap-northeast-3 to supported region list (#1566)
    • Add details about the two default Bottlerocket volumes to README (#1588)
    • Document webpki-roots version in webpki-roots-shim (#1565)
  • v1.1.1(May 19, 2021)

  • v1.1.0(May 7, 2021)

    Deprecation Notice

    The Kubernetes 1.16 variant, aws-k8s-1.16, will lose support in July 2021. Kubernetes 1.16 is no longer receiving support upstream. We recommend replacing aws-k8s-1.16 nodes with a later variant, preferably aws-k8s-1.19 if your cluster supports it. See this issue for more details.

    Important Notes

    New variants with new defaults

    This release introduces two new variants, aws-k8s-1.20 and vmware-k8s-1.20. We plan for all new variants, including these, to contain the following changes:

    • The kernel is Linux 5.10 rather than 5.4.
    • The kernel lockdown mode is set to "integrity" rather than "none".

    The ECS preview variant, aws-ecs-1, has also been updated with these changes.

    Existing aws-k8s variants will not receive these changes as they could affect existing workloads.

    ECS task networking

    The aws-ecs-1 variant now supports the awsvpc mode of ECS task networking. This allocates an elastic network interface and private IP address to each task.

    OS Changes

    • Add Linux kernel 5.10 for use in new variants (#1526)
    • Add aws-k8s-1.20 variant with Kubernetes 1.20 support (#1437, #1533)
    • Add vmware-k8s-1.20 variant with Kubernetes 1.20 for VMware (#1511, #1529, #1523, #1502, #1554)
    • Remove aws-k8s-1.15 variant (#1487, #1492)
    • Constrain ephemeral port range (#1560)
    • Support awsvpc networking mode in ECS (#1246)
    • Add settings for QPS and burst limits of Kubernetes registry pulls, event records, and API (#1527, #1532, #1541)
    • Add setting to allow configuration of Kubernetes TLS bootstrap (#1485)
    • Add setting for configuring Kubernetes cloudProvider to allow usage outside AWS (#1494)
    • Make Kubernetes cluster-dns-ip optional to support usage outside of AWS (#1482)
    • Change parameters to support healthy CIS scan (#1295) (Thanks, @felipeac!)
    • Generate stable machine IDs for VMware and ARM KVM guests (#1506, #1537)
    • Enable "integrity" kernel lockdown mode for aws-ecs-1 preview variant (#1530)
    • Remove override for default service start timeout (#1483)
    • Restrict access to bootstrap container user data with SELinux (#1496)
    • Split SELinux policy rules for trusted subjects (#1558)
    • Add symlink to allow usage of secrets store CSI drivers (#1544)
    • Prevent bootstrap containers from restarting (#1508)
    • Add udev rules to mount CD-ROM only when media is present (#1516)
    • Add resize2fs binary to sbin (#1519) (Thanks, @samjo-nyang!)
    • Only restart a host container if affected by settings change (#1480)
    • Support file patterns when specifying log files in logdog (#1509)
    • Daemonize thar-be-settings to avoid zombie processes (#1507)
    • Add support for AWS region ap-northeast-3: Osaka (#1504)
    • Generate pause container URI with standard template variables (#1551)
    • Get cluster DNS IP from cluster when available (#1547)

    Build Changes

    • Use kernel 5.10 in aws-ecs-1 variant (#1555)
    • Build only the packages needed for the current variant (#1408, #1520)
    • Use a friendly name for VMware OVA files in build outputs (#1535)
    • Update SDK to 0.21.0 (#1497, #1529)
    • Allow variants to specify extra kernel parameters (#1491)
    • Move kernel console settings to variant definitions (#1513)
    • Update vmw_backdoor dependency (#1498) (Thanks, @lucab!)
    • Archive old migrations (#1540)
    • Refactor default settings and containerd configs to shared files (#1538, #1542)
    • Check cargo version at start of build so we have a clear error when it's too low (#1503)
    • Fix concurrency issue in validate-repo that led to hangs (#1521)
    • Update third-party package dependencies (#1543, #1556)
    • Update Rust dependencies in the tools/ workspace (#1548)
    • Update tokio-related Rust dependencies in the sources/ workspace (#1479)
    • Add upstream runc patches addressing container scheduling failure (#1546)
    • Retry builds on known BuildKit internal errors (#1557, #1561)

    Documentation Changes

    • Document the deprecation of the aws-k8s-1.15 variant (#1476)
    • Document the need to quote most Kubernetes labels/taints (#1550) (Thanks, @ellistarn!)
    • Fix VMware spelling and document user data sources (#1534)
  • v1.0.8(Apr 12, 2021)

    Deprecation Notice

    Bottlerocket 1.0.8 is the last release where we plan to support the Kubernetes 1.15 variant, aws-k8s-1.15. Kubernetes 1.15 is no longer receiving support upstream. We recommend replacing aws-k8s-1.15 nodes with a later variant, preferably aws-k8s-1.19 if your cluster supports it. See this issue for more details.

    OS Changes

    • Support additional kubelet arguments: kube-reserved, eviction-hard, cpu-manager-policy, and allow-unsafe-sysctls (#1388, #1472, #1465)
    • Expand file and process restrictions in the SELinux policy (#1464)
    • Add support for bootstrap containers (#1387, #1423)
    • Make host containers inherit proxy env vars (#1432)
    • Allow gzip compression of user data (#1366)
    • Add 'apply' mode to apiclient for applying settings from URIs (#1391)
    • Add compat symlink for kubelet volume plugins (#1417)
    • Remove bottlerocket.version attribute from ECS agent settings (#1395)
    • Make Kubernetes taint values optional (#1406)
    • Add guestinfo to available VMware user data retrieval methods (#1393)
    • Include source of invalid base64 data in error messages (#1469)
    • Update eni-max-pods data file (#1468)
    • Update default host container versions (#1443, #1441, #1466)
    • Fix avc denial for dbus-broker (#1434)
    • Fix case of JSON keys output in host container user data (#1439)
    • Set mode of host container persistent storage directory after creation (#1463)
    • Add "current" persistent storage location for host containers (#1416)
    • Write static-pods manifest to tempfile before persisting it (#1409)
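Several of the settings above are configured through Bottlerocket's TOML user data. A hedged sketch of what exercising the new kubelet arguments (#1388, #1472, #1465) and a bootstrap container (#1387) might look like; key names follow the Bottlerocket settings model, but the values and the container name/source are purely illustrative, not defaults:

```toml
# Illustrative Bottlerocket user data; values are examples, not defaults.
[settings.kubernetes]
cpu-manager-policy = "static"
allowed-unsafe-sysctls = ["net.core.somaxconn"]

[settings.kubernetes.kube-reserved]
cpu = "250m"
memory = "500Mi"

[settings.kubernetes.eviction-hard]
"memory.available" = "15%"

# Bootstrap containers run at boot, before normal workloads start.
[settings.bootstrap-containers.example]
source = "<registry>/example-bootstrap:latest"
mode = "once"
essential = true
```

The same keys can be changed at runtime through the API rather than user data, which is the model the release notes describe.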

    Build Changes

    • Update default variant to aws-k8s-1.19 (#1394)
    • Update third-party packages (#1460)
    • Update Rust dependencies (#1461, #1462)
    • Update dependencies of host-ctr (#1371)
    • Add support for specifying a variant's supported architectures (#1431)
    • Build OVA packages and include them in repos (#1428)
    • Add support for qcow2 as an image format (#1425) (Thanks, @mikalstill!)
    • Prevent unneeded artifacts from being copied through build process (#1426)
    • Change image format for vmware-dev variant to vmdk (#1397)
    • Remove tough dependency from update_metadata (#1390)
    • Remove generate_constants logic from build.rs of parse-datetime (#1376)
    • In the tools workspace, update to tokio v1, reqwest v0.11, and tough v0.11 (#1370)
    • Run static and non-static Rust builds in parallel (#1368)
    • Disable CMDLINE_EXTEND kernel configuration (#1473)
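The gzip user-data support listed under OS Changes above (#1366) means settings can be compressed before being handed to the platform, which helps when user data approaches size limits. A minimal sketch of producing such a payload; the base64 wrapping reflects how many platforms transport user data, and the settings content is illustrative:

```python
import base64
import gzip

# TOML user data destined for a Bottlerocket host (illustrative content).
user_data = b'[settings.kubernetes]\ncluster-name = "example"\n'

# Compress, then base64-encode for transports that require text.
compressed = gzip.compress(user_data)
encoded = base64.b64encode(compressed).decode("ascii")

# The host decompresses gzipped user data transparently; a local
# round-trip shows the original settings survive intact.
round_trip = gzip.decompress(base64.b64decode(encoded))
assert round_trip == user_data
```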

    Documentation Changes

    • Document metrics settings in README (#1449)
    • Fix broken links for symlinked files in models README (#1444)
    • Document apiclient update as primary CLI update method (#1421)
    • Use apiclient set in introductory documentation, explain raw mode separately (#1418)
    • Prefer resolve:ssm: parameters for simplicity in QUICKSTART (#1363)
    • Update quickstart guides to have arm64 examples (#1360)
    • Document the deprecation of the aws-k8s-1.15 variant (#1476)