Habitat is open source software that creates platform-independent build artifacts and provides built-in deployment and management capabilities.

Overview

The goal of Habitat is to let you define your application's automation when you create the application, and then bundle the application with the automation it needs: its runtime behavior, update strategies, failure-handling strategies, and scaling behavior, wherever you choose to deploy it.

See a quick demo of how to build, deploy, and manage an application with Habitat: Build, Deploy and Manage with Habitat (5:33)

Project State: Active

Issues Response Time Max: 5 business days

Pull Request Response Time Max: 5 business days


Diagrams

Graphics that will help you and your team better understand the concepts and how they fit together into the larger Habitat ecosystem.

Where Habitat Fits

Habitat Flow Infographic

Try the interactive infographics on the website!

How Habitat Works

Habitat and Docker

View all diagrams in Docs

Training

View all demos and tutorials in Learn

Install

You can download Habitat from the Habitat downloads page.

Once you have downloaded it, follow the instructions on the page for your specific operating system.

If you are running macOS and use Homebrew, you can use our official Homebrew tap.

$ brew tap habitat-sh/habitat
$ brew install hab

If you are running Windows and use Chocolatey, you can install our Chocolatey package.

C:\> choco install habitat

If you do not use Homebrew or Chocolatey, or if you are on Linux, you can use the Habitat install.sh or install.ps1 script.

Bash:

$ curl https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.sh | sudo bash

PowerShell:

C:\> Set-ExecutionPolicy Bypass -Scope Process -Force
C:\> iex ((New-Object System.Net.WebClient).DownloadString('https://raw.githubusercontent.com/habitat-sh/habitat/master/components/hab/install.ps1'))
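
Whichever installation method you use, you can confirm that the hab CLI is installed and on your PATH by checking its version (the same command works in PowerShell):

$ hab --version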

Contribute

We are always looking for more opportunities for community involvement. Interested in contributing? Check out our CONTRIBUTING.md to get started!

Documentation

Get started with the Habitat tutorials or plunge into the complete documentation.

Code Organization

Core Plans

The Habitat plans that are built and maintained by Habitat's Core Team are in their own repo.

Habitat Supervisor and other core components

The code for the Habitat Supervisor and other core components is in the components directory.

Docs

Habitat's website and documentation source is located in the www directory of the Habitat source code. See its README for more information.

Roadmap

The Habitat project's roadmap is public and is on our community page.

The Habitat core team's project tracker is also public and on GitHub.

Community and support

Building

See BUILDING.md for platform specific info on building Habitat from source.

Further reference material

Code of Conduct

Participation in the Habitat community is governed by the code of conduct.

License

Copyright (c) 2016 Chef Software Inc. and/or applicable contributors

Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at

 http://www.apache.org/licenses/LICENSE-2.0

Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License.

Comments
  • Revised homepage / pricing / demo

    opened by mgamini 77
  • Added Eng blog to habitat.sh site

    Ok,

    It's not a totally massive addition, but I'm totally not a web developer. If anyone can pull this down, build it locally, and check it out, that would be good. I've verified it's a reasonable experience.

    As there are no blog articles in this PR, you'll need to add some markdown files to the www/source/blog directory in order to verify how everything looks with an actual blog post.

    If there are any glaringly obscene things in my css or slim templates please let me know.

    opened by eeyun 36
  • Habitat services reporting sys IP as 127.0.0.1

    Ran into a situation yesterday where in one of our gossip rings some services had started reporting their sys.ip as 127.0.0.1.

    For example:

                "ip": "127.0.0.1",
                "hostname": "<hostname>",
                "gossip_ip": "192.168.100.20",
                "gossip_port": 9638,
                "http_gateway_ip": "0.0.0.0",
                "http_gateway_port": 9631
              },
    

    This caused bound services to render the wrong IP in configurations/hooks, breaking interaction. In one case, a single node had both PostgreSQL and a backend API. PostgreSQL was showing the correct IP, while the backend API was showing 127.0.0.1.

    I was able to correct the issue by unloading and loading the affected services. This is a long-running ring that has updates for services pushed into it via CI/Chef. The host OS is CentOS 7.

    I have some debug logs I can send on request (not wanting to post directly here as they are live servers).
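
    For comparison, one way to see what a Supervisor is currently advertising is to query its HTTP gateway (a sketch, assuming the default gateway port 9631 shown in the snippet above; the exact JSON layout can vary between Supervisor versions):

    curl -s http://localhost:9631/census | grep -o '"ip": *"[^"]*"'
    curl -s http://localhost:9631/services | grep -o '"ip": *"[^"]*"'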

    Type:Blocked Focus:Supervisor Type: Bug 
    opened by wduncanfraser 35
  • WIP Documentation Refactor includes latest docs changes

    One thing specifically needs to go into this branch before merge. The quicklink cards got boogered up somehow and I was unable to deduce what broke them.

    Ideally, we need to also add continuous scroll and a pinned sidebar to the docs but that can go in after the refactor.

    Signed-off-by: eeyun [email protected]

    opened by eeyun 32
  • hab 0.10.2 doesn't copy keys inside the studio

    • [x] The OS (including version) where you are running any of the Habitat commands.

      • Ubuntu 16.04 LTS, Linux 4.4.0-38-generic, x86_64
      • hab 0.10.2/20160930230245
    • [x] Debug/backtrace of the command you are trying to run. You can set the following environment variables before running the hab command to generate a trace:

      • https://gist.github.com/lilianmoraru/e242716d0e633fb9f6012caf329c4a72
      • After it loaded (you can scroll to the end for this) I did ls /hab/cache/ so you can see that there is no keys folder inside it. After that I exited and entered the studio again (less output there) and listed /hab/cache again.
    • [x] Current Habitat environment variables where the hab command or supervisor is running. These can be gathered using:

      • Does not print anything (no variables except for AUTH).
    • [x] If this is a key related issue, please include the list of files (including user/group permissions) in /hab/cache/keys and $HOME/.hab/cache/keys via ls -la.

      • ls -la /hab/cache/keys
      ls -la /hab/cache/keys
      total 32
      drwxr-xr-x 2 root root 4096 oct  8 09:41 .
      drwxr-xr-x 6 root root 4096 iul 23 10:34 ..
      -rw-r--r-- 1 root root   75 iul 17 15:20 core-20160612031944.pub
      -rw-r--r-- 1 root root   75 sep 15 00:54 core-20160810182414.pub
      -r-------- 1 root root   77 iul 23 10:30 lilian-20160723073001.pub
      -r-------- 1 root root  121 iul 23 10:30 lilian-20160723073001.sig.key
      -r-------- 1 root root   77 oct  8 09:41 lilian-20161008064147.pub
      -r-------- 1 root root  121 oct  8 09:41 lilian-20161008064147.sig.key
      
      • ls -la $HOME/.hab/cache/keys
      ls -la $HOME/.hab/cache/keys
      total 16
      drwxr-xr-x 2 root root 4096 oct  8 09:10 .
      drwxr-xr-x 3 root root 4096 oct  8 09:09 ..
      -rw-r--r-- 1 root root   75 oct  8 09:09 core-20160612031944.pub
      -rw-r--r-- 1 root root   75 oct  8 09:10 core-20160810182414.pub
      

    I generate the keys with sudo and start the studio with sudo. I have always used it that way; it just doesn't work now.

    opened by lilianmoraru 32
  • Learn Chef tutorial inaccurate with respect to auth tokens

    I am following the lesson track as described at https://learn.chef.io/modules/try-habitat#/

    I have created an origin named nick-oliver, and all installation steps appear to complete correctly up to the point where I enter: curl -s 127.0.0.1 | grep 'The time is'. The track says that there is supposed to be output, but when I run the curl command, I get no output.

    I then attempt to do the hab pkg upload, entering the correct .hart file name in the ./results directory. At this point I get the error: No auth token specified

    I am running CentOS Linux release 7.5.1804 (Core)

    RUST_LOG=debug RUST_BACKTRACE=1 hab
    DEBUG 2018-05-16T20:14:25Z: habitat_common::ui: InputStream(stdin): { is_a_terminal(): true }
    DEBUG 2018-05-16T20:14:25Z: habitat_common::ui: OutputStream(stdout): { is_colored(): true, supports_color(): true, is_a_terminal(): true }
    DEBUG 2018-05-16T20:14:25Z: habitat_common::ui: OutputStream(stderr): { is_colored(): true, supports_color(): true, is_a_terminal(): true }
    DEBUG 2018-05-16T20:14:25Z: hab: clap cli args: ["hab"]
    DEBUG 2018-05-16T20:14:25Z: hab: remaining cli args: []
    DEBUG 2018-05-16T20:14:25Z: hab::analytics: Event: v=1&tid=UA-6369228-7&cid=526351ec-186a-49f6-a6cd-1636b2bd0840&t=event&aip=1&an=hab&av=0.55.0%2F20180321220925&ds=cli--hab&ec=clierror&ea=MissingArgumentOrSubcommand--hab--
    DEBUG 2018-05-16T20:14:25Z: hab::analytics: Creating directory /home/nick.oliver/.hab/cache/analytics
    DEBUG 2018-05-16T20:14:25Z: hab::analytics: Creating file /home/nick.oliver/.hab/cache/analytics/event-1526501665.479903065.txt
    hab 0.55.0/20180321220925
    
    Authors: The Habitat Maintainers <[email protected]>
    -----------------------------------------------------
    env | grep HAB | grep -v HAB_AUTH_TOKEN
    HAB_ORIGIN=nick-oliver
    
    ~/.hab/cache/keys$ ls -lash
    total 8.0K
       0 drwxrwxr-x. 2 nick.oliver nick.oliver  84 May 16 14:39 .
       0 drwxrwxr-x. 6 nick.oliver nick.oliver  59 May 16 15:05 ..
    4.0K -r--------. 1 nick.oliver nick.oliver  82 May 16 14:39 nick-oliver-20180516193924.pub
    4.0K -r--------. 1 nick.oliver nick.oliver 126 May 16 14:39 nick-oliver-20180516193924.sig.key
    
    ls -lash ~/.hab
    total 4.0K
       0 drwxrwxr-x.  4 nick.oliver nick.oliver   28 May 16 14:39 .
    4.0K drwx------. 31 nick.oliver nick.oliver 4.0K May 16 15:03 ..
       0 drwxrwxr-x.  6 nick.oliver nick.oliver   59 May 16 15:05 cache
       0 drwxrwxr-x.  2 nick.oliver nick.oliver   21 May 16 14:38 etc
    

    I would appreciate any assistance you can provide.
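
    For anyone hitting the same "No auth token specified" error: hab pkg upload needs a Builder auth token, which can be supplied through the HAB_AUTH_TOKEN environment variable (a sketch; the token and .hart filename below are placeholders, not values from this report):

    export HAB_AUTH_TOKEN=<your Builder token>
    hab pkg upload ./results/<origin>-<name>-<version>-<release>-x86_64-linux.hart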

    Stale Documentation 
    opened by nick-oliver 30
  • [studio] Create a `./results/` directory on build with artifact.

    This change is designed to make a Studio even more temporary and ephemeral for build-related tasks. Now when performing a build using the hab-studio build subcommand, a directory local to your current working directory is created called ./results, which will contain a copy of the produced Habitat artifact on success.

    Local artifact copy

    A ./results/ directory is now created in the same directory you invoke hab-studio build or hab-studio enter from and contains a copy of the produced Habitat artifact file. Because the parent directory of ./results/ has been bind-mounted into the Studio's chroot, this means that the ./results/ directory will appear locally outside the Studio and will persist after a Studio has been torn down.

    For example:

    results
    |-- core-bzip2-1.0.6-20160517193810-x86_64-linux.hart
    |-- last_build.env
    `-- logs
        `-- bzip2.2016-05-17-193809.log
    

    Modified build report

    Additionally, a build report (building on #509) is also written to the ./results directory as ./results/last_build.env. The intended user of this report is a build automation process, CI/CD environment, etc. The format is shell-compatible variable assignments of the form:

    <var>=<val>
    

    Where <var> is a variable and <val> is its value, delimited with an equals sign. The current contents of this build report are as follows:

    • pkg_origin - the artifact origin, such as core
    • pkg_name - the artifact name, such as bzip2
    • pkg_version - the artifact version, such as 1.0.6
    • pkg_release - the artifact release, such as 20160517193810
    • pkg_ident - the artifact's package identifier, such as core/bzip2/1.0.6/20160517193810

    For example:

    pkg_origin=core
    pkg_name=bzip2
    pkg_version=1.0.6
    pkg_release=20160517193810
    pkg_ident=core/bzip2/1.0.6/20160517193810
    pkg_artifact=core-bzip2-1.0.6-20160517193810-x86_64-linux.hart
    

    Note that the order of these variables is not guaranteed, only that the variable names and their values will be present, one per line. Any tooling which consumes this report should take measures to not assume any line ordering of the file's contents.
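
    Since the report is just shell-compatible assignments, a consumer such as a CI step can source the file and refer to the variables by name rather than by line position (a sketch, assuming a build has already produced ./results/last_build.env and an auth token is configured for the upload step):

    source ./results/last_build.env
    echo "Built ${pkg_ident} (${pkg_artifact})"
    hab pkg upload "./results/${pkg_artifact}"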

    Build logs with hab-studio build

    This change also puts a session recording-style log (captured via the script(1) program) in a ./results/logs subdirectory.

    Habitat plan.sh subdirectory default convention

    This change also introduces a happy-path directory naming convention if you choose to place your plan.sh and related files in a subdirectory under your project's root. In short, simply call the directory habitat/ and you're good to go.

    The hab-plan-build.sh program takes an argument as before, which is a path to a directory containing a plan.sh file; however, on failure it will additionally check for habitat/plan.sh before giving up.

    An example might help to clarify. Let's assume we are creating Habitat packages for a Ruby application which exists in its own git repository (the repository itself isn't important, only that the directory we're sitting in contains the root of the application).

    A simple "flat" style of arranging the Habitat-related files is:

    .
    |-- config
    |   `-- env.rb
    |-- default.toml
    |-- hooks
    |   `-- run
    |-- lib
    |-- plan.sh
    `-- test
    

    A "nested" style of arranging the Habitat-related files is:

    .
    |-- habitat
    |   |-- config
    |   |   `-- env.rb
    |   |-- default.toml
    |   |-- hooks
    |   |   `-- run
    |   `-- plan.sh
    |-- lib
    `-- test
    

    Calling hab-studio build in the root of this project will produce identical results without having to specify the habitat/ path in the "nested" style.

    The same applies to non-application Plans such as software/library Plans. There is no need for hooks, config, etc. in these styles of packages, so most of the time it is simple enough to put the plan.sh in the root of that project. However, the "nested" style works here too. To each, their own.

    It should also be noted that calling the build program inside a Studio (via hab-studio enter) also yields this behavior.
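
    As a quick illustration (a sketch; the project name is hypothetical), with the "nested" layout above the following should find the plan via the fallback, since ./plan.sh does not exist but ./habitat/plan.sh does:

    cd my-ruby-app        # hypothetical project root using the "nested" layout
    hab-studio build .    # checks ./plan.sh first, then falls back to ./habitat/plan.sh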

    Signed-off-by: Fletcher Nichol [email protected]

    opened by fnichol 27
  • Infrastructure sharing method needed

    Since the habitat repo has been split into habitat, builder, core, etc., we have some duplicated infrastructure. This has led to some issues, such as the Travis libsodium build being broken. Issues like this are likely to occur more often in the future, so it would be helpful to find a sustainable way to share this code.

    Aha! Link: https://chef.aha.io/features/APPDL-51

    Epic 
    opened by baumanj 25
  • Adds in blog post on using node scaffolding

    This adds a blog post on using Node scaffolding with a Node application and a MongoDB database.

    This uses the default Mongo database named test; I'm debating whether I should add info about creating your own DB.

    Signed-off-by: Nell Shamrell [email protected]

    opened by nellshamrell 25
  • initial metadata store implementation

    This is the first pass at adding a metadata store to the Depot server.

    • Repo server stores metadata in lmdb, an embedded in-memory database
    • Added PackageIdent struct which encapsulates functionality which was sprinkled throughout modules in the codebase for working with package identifiers. This struct is passed as an argument to many functions which previously took either four &strs or two &strs and two Option<&str>s
    • Added DataStore struct which encapsulates the lmdb environment and all databases
    • Added PkgDatabase which is populated from reading the metadata files of package archives
    • Added PkgIndex which is a special database that allows sorted duplicate data to form an index. This database can be queried to find the latest version of a package using a cursor.
    • Added ViewDatabase which contains the names of repositories and the packages which are a part of them. This allows us to segregate particular packages to only being available in certain environments while not duplicating data on disk for the actual package archive.
    • Added data_store and data_object modules which contain an abstraction on top of the FFI layer of lmdb
    • Add new command line option depot-repair
    • Add new command line option repo-create
    • Add new command line option repo-list
    • Uploading a package which already exists on the depot will no longer result in a forced upload
    • Moved package archive specific functions to package::archive module from package

    Known Issues

    • Starting multiple transactions from different threads is not currently working. Additional child transactions can be started, but not root transactions. This behaviour should be possible within LMDB.
    • Cursor functions don't return usable values for keys, but the data of the current item is good to read and available.
    opened by reset 25
  • [hab-sup] Exit 0 when given --help or --version

    Previously, hab-sup --version would exit with 86. Now, we exit 0 for both --version and --help.

    Note, however, we will still exit with an error if the help is displayed because of an error such as a bad subcommand or bad option.

    You may ask, "Do we need to print the error in the case of HelpDisplayed or VersionDisplayed?" In the case of HelpDisplayed the answer is yes because the help output isn't actually output to stdout but rather to a buffer that is contained in the error. In the case of VersionDisplay the answer is "kinda". The version is printed to stdout but without a newline, so printing the empty error prints a newline, which is a bit dirty but kept the function simple.

    opened by stevendanna 24
  • hab pkg [pro][de]mote doesn't report correct platform of package pro/demoted

    E.g.: When running on Linux, "hab pkg promote foo/bar/1.2.3/20220102030405", if successful, will always report "Promoted package foo/bar/1.2.3/20220102030405 (x86_64-linux)" ...regardless of whether the package is Linux or Windows.

    Converse is also true, when running on Windows, the platform reported is always x86_64-windows regardless of the package's actual platform.

    The correct package is promoted/demoted; this is a reporting-only issue.

    opened by mrysanek 0
  • Bump notify from 4.0.17 to 5.0.0

    Bumps notify from 4.0.17 to 5.0.0.

    Release notes

    Sourced from notify's releases.

    notify 5.0.0

    5.0.0 (2022-08-28)

    For a list of changes when upgrading from v4 see https://github.com/notify-rs/notify/blob/HEAD/UPGRADING_V4_TO_V5.md.

    Differences to 5.0.0-pre.16:

    • FIX: update minimum walkdir version to 2.2.2 #[432]
    • CHANGE: add need_rescan function to Event, allowing easier detection when a rescan is required #435
    • FIX: debouncer-mini: change crossbeam feature to crossbeam, to allow passthrough with notify re-exports #429
    • DOCS: improve v5-to-v5 upgrade docs #431
    • DOCS: file back v4 changelog into main #437
    • DOCS: cleanups and link fixes

    #431: notify-rs/notify#431 #432: notify-rs/notify#432 #437: notify-rs/notify#437 #435: notify-rs/notify#435 #429: notify-rs/notify#429

    Changelog

    Sourced from notify's changelog.

    notify 5.0.0 (2022-08-28)

    For a list of changes when upgrading from v4 see https://github.com/notify-rs/notify/blob/main/UPGRADING_V4_TO_V5.md.

    Differences to 5.0.0-pre.16:

    • FIX: update minimum walkdir version to 2.2.2 #[432]
    • CHANGE: add need_rescan function to Event, allowing easier detection when a rescan is required #435
    • FIX: debouncer-mini: change crossbeam feature to crossbeam, to allow passthrough with notify re-exports #429
    • DOCS: improve v5-to-v5 upgrade docs #431
    • DOCS: file back v4 changelog into main #437
    • DOCS: cleanups and link fixes

    #431: notify-rs/notify#431 #432: notify-rs/notify#432 #437: notify-rs/notify#437 #435: notify-rs/notify#435 #429: notify-rs/notify#429

    5.0.0-pre.16 (2022-08-12)

    • CHANGE: require config for watcher creation and unify config #426
    • CHANGE: fsevent: use RenameMode::Any for renaming events #371
    • FEATURE: re-add debouncer as new crate and fixup CI #286
    • FEATURE: allow disabling crossbeam-channel dependency #425
    • FIX: PollWatcher panic after delete-and-recreate #406
    • MISC: rework pollwatcher internally #409
    • DOCS: cleanup all docs towards v5 #395

    #395: notify-rs/notify#395 #406: notify-rs/notify#406 #409: notify-rs/notify#409 #425: notify-rs/notify#425 #286: notify-rs/notify#286 #426: notify-rs/notify#426 #371: notify-rs/notify#371

    5.0.0-pre.15 (2022-04-30)

    • CHANGE: raise MSRV to 1.56! #396 and #402
    • FEATURE: add support for pseudo filesystems like sysfs/procfs #396
    • FIX: Fix builds on (Free)BSD due to changes in kqueue fix release #399

    #396: notify-rs/notify#396 #399: notify-rs/notify#399 #402: notify-rs/notify#402

    5.0.0-pre.14 (2022-03-13)

    • CHANGE: upgrade mio to 0.8 #386

    ... (truncated)

    Commits
    • d985ae1 prepare 5.0.0
    • a83279f improve upgrade docs and fix Config link
    • 65da37b file back v4 history into changelog
    • 1fbf8fa add accessor for whether rescan is required on Event
    • 17580f6 fixup rebase gunk
    • 54465e9 fixup optional crossbeam feature selection in debouncer-mini
    • 94f1680 update minimum walkdir version to 2.2.2 (#432)
    • 4d16a54 fixup doc links post initial debouncer-mini release
    • d698f90 fixup moved readme due to reorg
    • 2f91e15 fix typo in readme
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies rust 
    opened by dependabot[bot] 3
  • Bump serde_yaml from 0.8.26 to 0.9.4

    Bumps serde_yaml from 0.8.26 to 0.9.4.

    Release notes

    Sourced from serde_yaml's releases.

    0.9.4

    • Add serde_yaml::with::singleton_map for serialization of enums as a 1-entry map (#300)
    • Reject duplicate keys when deserializing Mapping or Value (#301)

    0.9.3

    • Add categories to crates.io metadata
    • Add keywords to crates.io metadata

    0.9.2

    • Improve Debug representation of serde_yaml::Error

    0.9.1

    • Fix panic on some documents containing syntax error (#293)
    • Improve error messages that used to contain duplicative line/column information (#294)

    0.9.0

    API documentation: https://docs.rs/serde_yaml/0.9

    Highlights

    • The serde_yaml::Value enum gains a Tagged variant which represents the deserialization of YAML's !Tag syntax. Tagged scalars, sequences, and mappings are all supported.

    • An empty YAML input (or document containing only comments) will deserialize successfully to an empty map, empty sequence, or Serde struct as long as the struct has only optional fields. Previously this would error.

    • A new .apply_merge() method on Value implements YAML's << merge key convention.

    • The Debug representation of serde_yaml::Value has gotten vastly better (dtolnay/serde-yaml#287).

    • Deserialization of borrowed strings now works.

      #[derive(Deserialize, Debug)]
      struct Struct<'a> {
          borrowed: &'a str,
      }
      

      let yaml = "borrowed: 'kölcsönzött'\n"; let value: Struct = serde_yaml::from_str(yaml)?; println!("{:#?}", value);

    • Value's and Mapping's methods get and get_mut have been generalized to support a &str argument, as opposed to requiring you to allocate and construct a Value::String for indexing into another existing Value.

    • Mapping exposes more APIs that have become conventional on map data structures, such as .keys(), .values(), .into_keys(), .into_values(), .values_mut(), and .retain(|k, v| …).

    Breaking changes

    • Serialization no longer produces leading ---\n on the serialized output. You can prepend this yourself if your use case demands it.

    • Serialization of enum variants is now based on YAML's !Tag syntax, rather than JSON-style singleton maps.

    ... (truncated)

    Commits
    • d282c40 Release 0.9.4
    • 50f6ecd Merge pull request #301 from dtolnay/duplicate
    • ec1c1e4 Error on duplicate key when deserializing Mapping
    • 13837fd Delegate Value deserialization to Vec's and Mapping's impl
    • f3504fb Merge pull request #300 from dtolnay/with
    • f344c56 Add a singleton_map module for serde's 'with' attribute
    • 7e1b160 Pull in fixes from unsafe-libyaml 0.2.2
    • 3dceb15 Add test of serialize_key/serialize_value map
    • f7b55f1 Fix serialize_key of 1-entry maps
    • 5299f1e Derive Debug for emitter's Event
    • Additional commits viewable in compare view

    Dependabot compatibility score

    Dependabot will resolve any conflicts with this PR as long as you don't alter it yourself. You can also trigger a rebase manually by commenting @dependabot rebase.


    Dependabot commands and options

    You can trigger Dependabot actions by commenting on this PR:

    • @dependabot rebase will rebase this PR
    • @dependabot recreate will recreate this PR, overwriting any edits that have been made to it
    • @dependabot merge will merge this PR after your CI passes on it
    • @dependabot squash and merge will squash and merge this PR after your CI passes on it
    • @dependabot cancel merge will cancel a previously requested merge and block automerging
    • @dependabot reopen will reopen this PR if it is closed
    • @dependabot close will close this PR and stop Dependabot recreating it. You can achieve the same result by closing it manually
    • @dependabot ignore this major version will close this PR and stop Dependabot creating any more for this major version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this minor version will close this PR and stop Dependabot creating any more for this minor version (unless you reopen the PR or upgrade to it yourself)
    • @dependabot ignore this dependency will close this PR and stop Dependabot creating any more for this dependency (unless you reopen the PR or upgrade to it yourself)
    dependencies rust 
    opened by dependabot[bot] 2
  • pkg_source not respected and do_download() overwrite not seen

    The command being run is hab pkg build . Setting pkg_source in a plan.sh file to either a git URL or an archive (e.g. pkg_source="https://location/of/file.tar.bz2") and overriding do_download() seems to neither use the do_download() override nor attempt to download the archive. The output is always:

    Cloning into 'repo'...
    Username for 'https://repo-location.com': fatal: could not read Username for 'https://repo-location.com': Success
    
    • The OS (including version) where you are running any of the Habitat commands.
    NAME="Amazon Linux"
    VERSION="2"
    ID="amzn"
    ID_LIKE="centos rhel fedora"
    VERSION_ID="2"
    PRETTY_NAME="Amazon Linux 2"
    ANSI_COLOR="0;33"
    CPE_NAME="cpe:2.3:o:amazon:amazon_linux:2"
    HOME_URL="https://amazonlinux.com/"
    
    • Debug/backtrace of the command you are trying to run. You can set the following environment variables before running the hab command to generate a trace:

    Running the hab command with the RUST env vars didn't produce a stacktrace, I think because it is just a failure to download from a git remote.

    $ hab --version
    hab 1.6.521/20220603154827
    
    • [ ] Current Habitat environment variables where the hab command or supervisor is running.
       hab-studio: Exported: HAB_AUTH_TOKEN=[redacted]
       hab-studio: Exported: HAB_ORIGIN=[redacted]
       hab-studio: Exported: HAB_BLDR_URL=https://bldr.biome.sh/
       hab-studio: Exported: AWS_DEFAULT_REGION=[redacted]
       hab-studio: Exported: AWS_SECRET_ACCESS_KEY=[redacted]
       hab-studio: Exported: HAB_STUDIO_HOST_ARCH=[redacted]
       hab-studio: Exported: AWS_ACCESS_KEY_ID=[redacted]
       hab-studio: Exported: GITLAB_USER=[redacted]
       hab-studio: Exported: GITLAB_TOKEN=[redacted]
    

    My goal was to have the hab pkg build . command pull from a remote private repo (using either the username:token option in the URL or the SSH clone option). It doesn't seem to matter what I put into the do_download() function or the pkg_source variable.
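
    For reference, a minimal sketch of the kind of plan the report describes (the origin, name, and repository URL here are placeholders, not taken from the report):

    pkg_origin=myorigin
    pkg_name=myapp
    pkg_version="1.0.0"
    pkg_source="https://location/of/file.tar.bz2"

    do_download() {
      # Illustrative override: fetch from a private git remote instead of $pkg_source.
      git clone "https://${GITLAB_USER}:${GITLAB_TOKEN}@repo-location.com/group/repo.git" \
        "${HAB_CACHE_SRC_PATH}/${pkg_name}-${pkg_version}"
    }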

    opened by justincolangelo 6
  • Proposal: Native Packages, enabling Supervisor service support for platforms without packages.

    Summary

    This is a proposal for introducing the idea of Habitat package types. There will primarily be two types of packages:

    • Standard Packages: These are currently the only type of packages we have. A standard package is built in a cleanroom environment and is guaranteed to run in any environment once it has been built. This is possible because every dependency of a standard package is also another standard package.
    • Native Packages: This is a new type of package. Native packages are built in the host environment that the habitat packaging tool gets run in. These packages transfer the responsibility of managing runtime dependencies to the user. A native package is essentially a thin wrapper around an existing application that makes use of hooks to define how to manage the application lifecycle.

    Motivation

    The Habitat Supervisor depends on .hart packages to define how a service must be managed through the use of hooks. As a result, we have a hard dependency on the availability of Habitat packages to be able to leverage the Supervisor on a new platform. We would like to break this dependency to enable a separation between Habitat's packaging and Habitat's Supervisor runtime capabilities. Users could make use of the Habitat Supervisor runtime without needing a managed set of packaged dependencies for their desired platform (e.g. ARM, Solaris, M1 Macs), shortening their on-boarding time.

    Design

    To enable support for native packages, the HAB_FEAT_NATIVE_PACKAGE_SUPPORT environment variable must be set to true.

    To create a Native Package you must (a sketch of such a plan follows this list):

    • Put # package_type: "native" as the topmost line of your plan.sh / plan.ps1 file.
    • You cannot have the following plan parameters:
      • pkg_deps
      • pkg_build_deps
      • pkg_interpreters
    • You must have a run hook defined for your package.
    • You can use any interpreter for your hooks. You must use a shebang line at the start of all your hook files with an appropriate path to the interpreter you wish to use.
    • There will be a new Metafile in a package called PKG_TYPE. It will contain native or standard to indicate the type of package.
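
    As a rough sketch of what such a plan might look like under this proposal (all names here are hypothetical, and the feature is gated behind HAB_FEAT_NATIVE_PACKAGE_SUPPORT=true):

    # package_type: "native"
    pkg_origin=myorigin
    pkg_name=my-native-app
    pkg_version="0.1.0"
    # Note: no pkg_deps, pkg_build_deps, or pkg_interpreters are allowed here,
    # and a hooks/run file (with its own shebang line) must accompany the plan.

    Building it with hab pkg build . would then run on the host rather than inside a studio, per the Behavior Changes below.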

    Behavior Changes

    • Invoking hab pkg build will automatically run the hab plan build script on the host system when it detects that the plan builds a native package.
    • Invoking build within a studio environment for a native package plan will result in an error message suggesting that the user run hab pkg build outside the studio.
    • Adding any unsupported plan parameter such as pkg_deps will result in a build time error message.
    • Native Package hooks will have to ensure that the interpreter they use and any commands that are used in them are available in the runtime environment.

    Design Issues Considered

    • Supporting dependencies between native packages is likely to be error prone as the user has to ensure that they always build and run the package in an environment that is compatible with all dependencies. Hence we will also not be supporting native packages that are libraries at this time.
    • We need to introduce a new concept here (ie: Package Type) in order to have a clean way to detect and prevent mixing up of packages meant to work in different environments.
    • There is no way to have both a native and a standard package plan file in the same folder, due to the existing rules of writing plan files.
    • We chose to use a magic comment in the header vs a bash/ps variable because the latter would require us to parse and interpret the script to evaluate the package type. This is troublesome because we need to decide whether to build the plan on the host system or within a studio environment prior to actually running the plan. Parsing a magic comment is a simpler and more reliable way to do this versus running the plan through an interpreter runtime for this purpose.

    Things yet to be decided

    • How will the Builder APIs have to be modified to support this?
    opened by atrniv 6