
The Advantage of Yeet

By: Necco Ceresani

Yeet is a single daemon that turns every Linux machine into a programmable, queryable node. It layers the kernel-depth visibility of eBPF with the accessibility of JavaScript, the structured queryability of GraphQL, and the reasoning capability of AI agents — giving any developer or AI system the ability to build purpose-built infrastructure tooling in minutes, without code changes, at the kernel level.

The closest analogy is Claude Code, but for infrastructure. Claude Code gives an AI agent structured access to a codebase: it can read files, understand project structure, navigate dependencies, and make changes. The developer provides intent, and the agent does the investigation and remediation work. Yeet does the same thing, but the "codebase" is your live, running infrastructure. The system graph is the structured read access. The eBPF lifecycle management is the ability to act. And unlike Claude Code, which operates on static artifacts (files on disk), Yeet operates on live, streaming reality.

That makes Yeet hard to categorize. Its domain spans a few crowded segments of the infrastructure industry: observability platforms like Datadog and Grafana, eBPF toolkits like bpftrace, and an ever-growing list of AI-for-ops startups. Most engineers would not say that observability platforms compete with eBPF toolkits, yet both categories are being reshaped by what Yeet does. Yeet sits at the intersection of some of the most important infrastructure trends of the last decade: eBPF maturation, AI-assisted development, and the shift toward programmable infrastructure.

If you were to do a feature-by-feature comparison of Yeet and the leading product in any one of these areas, Yeet might appear to be a modest competitor. It doesn't have Datadog's ecosystem of integrations. It doesn't have bpftrace's raw BPF expressiveness for kernel hackers. It doesn't have the mature dashboarding experience of Grafana. But Yeet's particular combination of architectural choices produces a whole that is significantly more powerful than the sum of its parts, spanning a range of infrastructure challenges in a way no single tool has before.


The Problem with Understanding Systems

It is hard — and often expensive — to understand what your systems are actually doing in production. Every application has different failure modes, different queries that matter, and different performance characteristics. Yet the industry's answer for the last fifteen years has been the same: ship your data to a vendor, look at their dashboards, and hope that their generic view of your system happens to surface the thing that matters to you.

This scales terribly. As companies grow from a handful of services to thousands of microservices across hundreds of machines, developers have to configure logging, tracing, and metrics for every service they ship. Operations teams need to build and maintain dashboards that somehow anticipate every failure mode in advance. These are generally good practices, but they create an enormous maintenance burden that grows faster than the infrastructure itself.

A tool that lets any developer — or any AI agent — build purpose-built instrumentation for their specific application, in minutes, without code changes, represents a fundamental shift.

If a developer could describe what they want to see and get a working tool that shows them their application's behavior at the kernel level, the time-to-insight would shrink from days to minutes. If operations teams could deploy purpose-built investigation tools against live incidents without pre-configuring anything, they could resolve problems before customers notice.


Design Philosophy

Yeet's design philosophy centers on three ideas that together define a new category of infrastructure tooling.

Every machine should be a programmable, queryable node

Not a black box that occasionally emits metrics. The yeet daemon models the entire system as a live graph of interrelated entities — processes, threads, containers, GPUs, network sockets, file descriptors, and hardware sensors — all interconnected and all queryable in real time through a typed GraphQL schema.
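To make that concrete, here is what a query against such a graph might look like. The field names below are invented for the example and are not guaranteed to match Yeet's actual schema; the point is the shape of the exchange — a typed query in, typed JSON out.

```javascript
// Hypothetical query against the system graph. Field and argument names
// are illustrative, not Yeet's real schema.
const query = `
  query HotProcesses {
    processes(orderBy: CPU_DESC, limit: 3) {
      pid
      command
      cpuPercent
      container { name image }
    }
  }`;

// The daemon would answer with typed JSON matching the schema, e.g.:
const exampleResponse = {
  processes: [
    { pid: 4231, command: "postgres", cpuPercent: 91.2,
      container: { name: "db", image: "postgres:16" } },
  ],
};

console.log(exampleResponse.processes[0].command); // postgres
```

Because the response shape is declared by the schema, a consumer — human or agent — never has to guess what fields exist or what types they carry.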

Instrumentation should be dynamic, not static

Instead of deciding in advance what to monitor, teams can deploy new eBPF probes and JavaScript transforms at runtime, targeting exactly the data they need for the problem at hand.

The data layer should be built for machines to reason about

Not just for humans to stare at. The system graph is typed, structured, and streaming — designed from the ground up so that AI agents can read, query, and act on live infrastructure at the kernel level.

This leads to an operations model that is not only more responsive but more intelligent. Purpose-built tools can be created as incidents demand. AI agents can observe the system graph, detect anomalies, correlate events across containers and kernel state, and in some cases remediate issues autonomously.


What You Get with Yeet

Bespoke observability tools, built in minutes

Every observability tool on the market gives you their dashboards, their metrics, their view of your system. Yeet builds your view — specific to your application, your services, your business logic. A Python profiler that parses your specific HTTP traffic. A GPU monitor that tracks your specific workload patterns. An agent that correlates your deploy cadence with your performance regressions.

These aren't pre-built features on a roadmap. They're tools you describe and generate, using JavaScript over the kernel, often with AI assistance. What used to require a dedicated observability engineering team can now be built in an afternoon.

Kernel-depth visibility without kernel-level expertise

eBPF is the most powerful instrumentation technology in Linux, but writing raw BPF — C code constrained by a strict kernel verifier — has historically been extremely difficult. Yeet puts a V8 JavaScript engine on top of the eBPF pipeline, so any developer or AI agent can write a JS script to dynamically inspect any memory address, trace any syscall, or profile any process.

The real mission of Yeet is to make BPF accessible. When people who barely know what a kernel is are using it to solve their problems, that mission is coming to fruition.

Deployment that takes minutes, not sprints

Getting started with Yeet means installing a single daemon. Run yeet install, and the host becomes a programmable, queryable node. There is no agent constellation to orchestrate, no backend to provision, no YAML labyrinth to navigate.

Once the daemon is running, you have three paths: use a script that ships out of the box, customize an existing script to fit your application, or write something entirely new. Each path takes minutes. The daemon manages everything else — eBPF lifecycle, system graph queries, V8 sandboxing, and event streaming — so the developer's only job is to describe what they want to see or do.

A data layer designed for AI agents

The system graph isn't just a clean API for developers. It's a tool-use interface purpose-built for LLM agent workflows. GraphQL matches the shape of AI tool-calling exactly: typed schema, structured queries, structured responses. An agent doesn't need to parse ps aux output and hope the columns didn't shift. It queries the graph and gets clean, typed data back. The schema itself is the documentation.

Combined with a shared-memory IPC ring buffer that delivers millions of kernel events per second, this gives AI agents something they've never had before: a continuous, structured perception of live infrastructure.
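In simplified, single-process form, the ring-buffer pattern behind this kind of event transport looks like the sketch below. The real implementation lives in a shared memory segment with atomic head/tail indices across processes; this only shows the mechanics of a fixed-size, lossy, never-blocking buffer.

```javascript
// Simplified single-process sketch of a ring buffer -- the pattern behind
// shared-memory event transport. A full producer never blocks: when the
// buffer is full, the oldest event is overwritten.
class RingBuffer {
  constructor(capacity) {
    this.buf = new Array(capacity);
    this.capacity = capacity;
    this.head = 0; // next write slot
    this.tail = 0; // next read slot
    this.size = 0;
  }
  push(event) {
    if (this.size === this.capacity) {
      // Full: drop the oldest event rather than stall the producer.
      this.tail = (this.tail + 1) % this.capacity;
      this.size--;
    }
    this.buf[this.head] = event;
    this.head = (this.head + 1) % this.capacity;
    this.size++;
  }
  pop() {
    if (this.size === 0) return undefined;
    const event = this.buf[this.tail];
    this.tail = (this.tail + 1) % this.capacity;
    this.size--;
    return event;
  }
}

const rb = new RingBuffer(3);
[1, 2, 3, 4].forEach((e) => rb.push(e)); // 1 is overwritten by 4
console.log(rb.pop(), rb.pop(), rb.pop()); // 2 3 4
```

The overwrite-oldest choice is what lets a kernel-side producer sustain millions of events per second without ever waiting on a slow consumer.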

Alerting with context, not just noise

Most alerting today is a notification that something crossed a threshold, followed by a human scrambling to figure out what's actually happening. Yeet inverts that. When a PagerDuty alert fires, it can trigger a Yeet workflow that immediately snapshots the relevant data — CPU state, memory pressure, recent deploys, process trees, network conditions — and delivers a structured investigation summary to the engineer in Slack, arriving at roughly the same time as the page itself.

On-demand profiling across any language

Traditional profiling tools require dedicated setup, always-on agents, and vendor storage that accumulates costs whether anyone is looking at the data or not. Yeet takes a different approach: when you want to profile something, you run a script.

A full CPU profiler — BPF program loaded, perf events attached, symbolized stack traces streaming in real time — in about a hundred lines of JavaScript. When you're done, you stop the script. No residual overhead. No data accumulating in a backend.
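The userspace half of that pipeline — folding captured stacks into counts, the standard input format for a flame graph — can be sketched in a few lines. The stack capture itself is the BPF program's job and is not shown; the sample data below is invented.

```javascript
// Core aggregation step of a sampling profiler: fold captured stacks
// into counts keyed by the semicolon-joined "folded stack" form that
// flame graph tooling consumes. Stack capture happens in BPF; this is
// only the userspace reduction.
function foldStacks(samples) {
  const counts = new Map();
  for (const stack of samples) {
    const key = stack.join(";"); // e.g. "main;handleRequest;parseBody"
    counts.set(key, (counts.get(key) ?? 0) + 1);
  }
  return counts;
}

const samples = [
  ["main", "handleRequest", "parseBody"],
  ["main", "handleRequest", "parseBody"],
  ["main", "writeLog"],
];
const counts = foldStacks(samples);
console.log(counts.get("main;handleRequest;parseBody")); // 2
```

Each count is proportional to CPU time spent in that code path, which is how "the forty lines burning the most cycles" falls out of the data.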

Yeet's profiler works across languages, compiled and interpreted alike, and it shows you exactly which lines of code are burning the most cycles. Not just which services are slow, but the forty lines buried in a forgotten module from two years ago where all the performance is actually being lost.

Near-zero overhead, zero lock-in

No code changes to your application. No logs to configure. No vendor storage. No always-on collection agents bleeding money when nothing is wrong. Yeet runs on your infrastructure, data never leaves your systems, and it costs near-nothing when idle. Deploy it as a systemd service. Remove it with apt remove.

Portability across kernel versions through BPF polyfilling

Deploying eBPF programs across a real fleet — different kernel versions, different architectures, different capabilities — has been one of the hardest unsolved problems in the ecosystem. Yeet solves this the same way the web solved browser compatibility: with a runtime that polyfills the gaps.

The V8 layer abstracts over kernel differences at the application level. Write your BPF program once, deploy it everywhere, and the runtime handles the rest.
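One concrete instance of the pattern, sketched with invented names: kernels 5.8 and later offer the BPF ring buffer, while older ones only have perf buffers, so a runtime can select a transport per host and keep the calling script unchanged. This is a sketch of the dispatch idea, not Yeet's actual API.

```javascript
// Illustrative polyfill dispatch (names are hypothetical, not yeet's API):
// pick an event transport based on what the running kernel supports, so
// the script is written once and works on every host in the fleet.
function resolveTransport(kernelFeatures) {
  if (kernelFeatures.has("ringbuf")) {
    return { transport: "ringbuf" };    // kernel 5.8+: BPF ring buffer
  }
  return { transport: "perf_event" };   // older kernels: perf buffer fallback
}

console.log(resolveTransport(new Set(["ringbuf"])).transport); // ringbuf
console.log(resolveTransport(new Set()).transport);            // perf_event
```

The calling script never sees the difference — exactly the role polyfills played for browser APIs on the web.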

Kernel-level security enforcement, built for your threat model

Every supply chain attack, every compromised CI/CD action, every rogue AI agent follows the same playbook: read credential files from known paths, exfiltrate secrets, escalate access. Yeet blocks that playbook at the syscall level.

A single script can enforce which processes are allowed to read ~/.aws/credentials, ~/.ssh/*, or any sensitive path — denying the read before file contents ever reach memory. For edge traffic, Yeet can decode HTTP/2 headers in the kernel and make per-request blocking decisions at XDP speed, before packets reach your proxy.
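The decision logic behind such a policy can be sketched as plain JavaScript. The process names and paths below are hypothetical, and the enforcement hook itself — denying the open before it completes — is the daemon's job and is not shown here.

```javascript
// Hypothetical allowlist policy: which process may read which sensitive
// path. Only the decision function is sketched; the syscall-level denial
// is what the daemon's eBPF layer would provide.
const policy = [
  { path: /\/\.aws\/credentials$/, allow: ["aws", "terraform"] },
  { path: /\/\.ssh\//,             allow: ["ssh", "scp", "git"] },
];

function allowRead(processName, filePath) {
  for (const rule of policy) {
    if (rule.path.test(filePath)) {
      return rule.allow.includes(processName);
    }
  }
  return true; // paths not covered by any rule stay unrestricted
}

console.log(allowRead("curl", "/home/dev/.aws/credentials")); // false
console.log(allowRead("aws", "/home/dev/.aws/credentials"));  // true
```

A compromised build step running curl against a credential path simply gets a denied read, while the legitimate tooling is unaffected.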

These aren't pre-built rules from a vendor's threat database. They're policies you define, in JavaScript, specific to your infrastructure, your credential paths, your traffic patterns. The same programmability that makes Yeet powerful for observability makes it surgical for security.


Where Yeet Fits — and Where It Doesn't

Yeet's breadth of capabilities makes it important to understand what it replaces, what it complements, and where its boundaries lie. In many cases, Yeet fills gaps that existing tools cannot reach. In some cases, for teams with simpler needs, it can reduce or eliminate the need for dedicated tooling in a category entirely. But it is not trying to be a monolithic replacement for your entire infrastructure stack.

Full-Stack Observability Platforms (Datadog, New Relic, etc.)

Yeet doesn't replace your existing observability stack wholesale. It fills the gap your existing tools can't reach. Grafana and Datadog are strong for historical trends, shared dashboards, and compliance audit trails. Yeet provides what they cannot: programmable data transforms, edge-triggered rules engines, automated root cause analysis, autonomous remediation, deploy correlation, and kernel-level real-time data at sub-second latency with per-process, per-thread granularity.

Cloud Platforms (AWS, GCP, Azure)

Yeet is not a cloud platform. It runs on Linux machines that already exist in your infrastructure. It doesn't create instances, manage object stores, or provision block storage. Yeet gives you deep visibility into what's happening on the hosts you already have, without requiring you to ship data to a cloud backend.

APM and Dynamic Analysis Tools (Sentry, Dynatrace, etc.)

Yeet can perform dynamic code analysis in production — identifying hot paths, tracing execution across functions, and profiling live workloads — but it approaches the problem from the kernel up rather than from application instrumentation down. It doesn't require SDKs embedded in your application code or language-specific agents. Instead, it attaches eBPF probes to running processes and observes their behavior directly.

This means Yeet can analyze code performance without any code changes, but it doesn't provide the application-level context (stack traces with business logic annotations, user session correlation) that purpose-built APM tools offer. The two are complementary: APM tells you which user request was slow; Yeet tells you exactly which forty lines of code made it slow and whether the optimization is worth your time.

Configuration Management (Ansible, Puppet, Chef)

Yeet doesn't manage the ongoing configuration state of your hosts. It doesn't write config files or enforce system-level policies through declarative manifests. Yeet's system graph can observe configuration drift, and its agents can flag or even remediate specific issues, but the daemon is not a replacement for the tools that define what your infrastructure should look like at rest.

CI/CD Pipelines (GitHub Actions, Jenkins)

Yeet can correlate deploys with incidents, but it doesn't orchestrate your deployment pipeline. It integrates with deploy systems through installable components like @yeet/github to pull commit history, PR data, and changed files into agent context. It can even take action on what it finds: an uptime monitoring script, for instance, can identify HTTP errors caused by a bad deploy and submit a pull request to fix them automatically. But the pipeline itself remains your existing toolchain.

Alerting and Incident Management (PagerDuty, OpsGenie)

Yeet doesn't replace your paging infrastructure or incident management workflows. What it does is make them dramatically more useful. Instead of an alert that says "CPU is high," Yeet can trigger a workflow that investigates the problem the moment the alert fires, snapshots the relevant system state, and delivers a structured situation report alongside the page. The engineer who wakes up at 3am gets an investigation summary, not a bare threshold notification.

Security Platforms (Falco, CrowdStrike)

Yeet's eBPF foundation gives it powerful security capabilities that go well beyond passive detection: credential file protection that blocks unauthorized reads at the syscall level, L7 traffic inspection and blocking at XDP speed, process exec tracking, file integrity monitoring, network anomaly detection, and per-packet inspection hooks. For AI agent runtimes specifically, Yeet enforces what agent-generated code can and cannot access at the kernel — the only layer that code cannot bypass.

That said, Yeet is not a SIEM, and it doesn't ship with pre-built compliance frameworks or threat intelligence feeds. What it provides is the ability to build security tools that are purpose-fit for your specific threat model, your credential paths, your traffic patterns, rather than relying on a vendor's generic ruleset. Security platforms like CrowdStrike detect threats against a known database; Yeet enforces policies you define, at the kernel, against the specific attack surface of your infrastructure.

Log Management (Splunk, Elastic)

Yeet doesn't collect, store, or index logs. It deliberately avoids the log pipeline entirely. Instead of asking applications to emit logs and then building tooling to search them, Yeet observes the system directly at the kernel level. There are no logs to configure because the kernel already knows what every process is doing. That said, log management tools remain valuable for compliance, audit trails, and long-term forensics.

ML Operations Platforms (MLflow, Weights & Biases)

Yeet can monitor GPU utilization, detect memory leaks in training jobs, and identify stalled workloads before they cascade into cluster-wide failures. But it doesn't manage experiment tracking, model versioning, or training pipelines. It provides the infrastructure visibility layer that ML platforms typically lack.


Many of the underlying technologies Yeet builds on — eBPF, V8, GraphQL, shared-memory IPC — are not entirely new. But Yeet's particular combination of architectural and workflow choices is. It finally makes kernel-level instrumentation approachable to the average developer, and legible to the AI agents that are increasingly doing the work.

That's not an incremental improvement to observability. It's the foundation for infrastructure that understands itself.

Get early access to Yeet

Join the waitlist and be first to know when we launch.