Changelog

What's new in Creek.

Dashboard · api.creek.dev

Cache-aware Analytics: see the traffic the worker never knew about

  • -**Analytics tab now shows real visitor traffic — including the requests CF served from the edge cache.** A static SPA's index.html gets `cache-control: public, max-age=0, must-revalidate` from the Workers+Assets binding, and CF's edge fronts it with ETag-based revalidation: subsequent visits are answered at the edge without ever invoking the worker. Tail events only fire on worker invocations — by definition, cache hits are invisible to them. We were under-reporting traffic for every cached HTML response. Fixed by querying zone-level `httpRequestsAdaptiveGroups` from CF's GraphQL Analytics API alongside the existing Analytics Engine source. The Requests card is now the true number, and a Cache hit % card sits next to it so you can see how much of your traffic is being CDN-served vs hitting your worker.
  • -**Metrics and Analytics merged into a single Analytics tab.** Two tabs reading two data sources for overlapping numbers was confusing — same chart, slightly different totals depending on which side ran first. Six stat cards now live in one place: Requests (zone, all traffic), Cache hit (% served from edge), Invocations (worker runs), Error rate (errors / invocations — measured against invocations because cache hits can't error), CPU p50, CPU p99 (production deployment, from Workers GraphQL).
  • -**Stacked time-series chart.** When zone analytics is available, the requests-over-time chart stacks origin traffic on top and edge-cached traffic in a lighter shade — so you can see when a viral spike was actually CPU pressure on your worker vs just CDN passthrough. Falls back to the worker-invocations chart when zone data isn't available.
  • -**Period coverage expanded to 1h / 6h / 24h / 7d / 30d.** Both the AE-backed metrics endpoint and the Workers-GraphQL-backed analytics endpoint accept the same set, so switching periods recomputes everything coherently.
  • -**Per-deployment-type breakdown.** New visualizations for: HTTP method (GET / POST / …), deployment type (production / branch / preview), and status bucket (2xx / 4xx / 5xx). Driven by the Analytics Engine dataset the tail-worker has been writing since Phase 8. A noisy 4xx column is the fastest way to spot a broken redirect a user is hammering.
  • -**Logs tab note: edge-cached requests don't appear in logs.** This was the most-asked question — "why don't I see GET / in my logs?". Worker logs are, by definition, records of worker invocations only; cached requests ran no code and have no `console.log` output to show. The Logs tab now says this explicitly, with a deep link to the Analytics tab where total traffic (including cache hits) is visible.
  • -**Architectural alignment with CF's documented pattern.** Tail Workers are the right tool for invocation detail (CPU, exceptions, console output, sub-requests). Zone Analytics is the right tool for total request volume (cache + origin). We layer them, like CF's own dashboard does, instead of trying to make either one cover both jobs. Bonus: the layered model means tenants don't lose edge caching for the sake of observability — which would have been the most expensive false economy the platform could choose.
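To make the layering concrete, here is a minimal sketch of how the two sources could be merged into the six stat cards. The type and function names (`ZoneTotals`, `WorkerTotals`, `buildStatCards`) are illustrative, not Creek's actual API:

```typescript
// Illustrative sketch: zone analytics is the source of truth for total
// traffic (cache + origin); tail events only cover worker invocations.
interface ZoneTotals { requests: number; cachedRequests: number }
interface WorkerTotals { invocations: number; errors: number; cpuP50Ms: number; cpuP99Ms: number }

function buildStatCards(zone: ZoneTotals | null, worker: WorkerTotals) {
  // Fall back to invocation counts when zone data isn't available.
  const requests = zone ? zone.requests : worker.invocations;
  const cacheHitPct = zone && zone.requests > 0
    ? (zone.cachedRequests / zone.requests) * 100
    : null; // unknown without zone data
  // Error rate is quoted against invocations: cached responses never ran code.
  const errorRatePct = worker.invocations > 0
    ? (worker.errors / worker.invocations) * 100
    : 0;
  return {
    requests,
    cacheHitPct,
    invocations: worker.invocations,
    errorRatePct,
    cpuP50Ms: worker.cpuP50Ms,
    cpuP99Ms: worker.cpuP99Ms,
  };
}
```

With 1,000 zone requests of which 700 were cache hits and 300 worker invocations, the cards show 1,000 requests and a 70% cache hit rate — the worker-only view would have under-reported traffic by 70%.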
cli@0.4.13 · creek@0.4.13 · sdk@0.4.5

creek logs: per-tenant observability, live tail, WebSocket streaming

  • -**`creek logs` — read your project's logs from the CLI.** Every `console.log`, `console.error`, and uncaught exception in every request your Worker served is captured, per-tenant, and archived for 7 days. Query by time range, outcome, deployment preview, branch, console level, or free-text search. Pipe to `jq` with `--json`. First day it was live it surfaced a latent `TypeError: Cannot read properties of undefined (reading 'fetch')` in a tenant's SPA fallback that had been quietly 500-ing for hours — the command paid for itself within a minute.
  • -**`creek logs --follow` — live tail via WebSocket.** Prints recent historical context first so you don't stare at an empty terminal waiting for traffic, then streams new entries as they arrive (until Ctrl+C). Dedup is automatic across the historical → live boundary. Pair with `--outcome exception` to get a live error feed, or pipe `--json | jq` straight into a notification webhook.
  • -**Per-tenant isolation is structural, not role-gated.** The R2 key prefix is server-derived as `logs/{team}/{project}/...` from the authenticated session — a user cannot read another team's logs even by forging URL params. Tested end-to-end against the `vite-react-drizzle` sample. Sensitive request headers (Authorization, Cookie, Set-Cookie) are redacted at ingest and never stored.
  • -**Deploy path unification.** `deploySandbox` and `deployAuthenticated` used to be two parallel implementations that drifted — the sandbox path was silently missing worker bundling for anyone who declared `[build].worker` but hadn't signed in. Replaced with a single `prepareDeployBundle()` helper driving both flows, and a pure `planDeploy()` resolver (SDK) with 17 table-driven test rows covering every combination of (framework × worker entry × build output). 'SPA framework + custom Worker' (the vite-react-drizzle shape) now deploys correctly with zero config changes.
  • -**Pre-bundled Worker shortcut.** When `[build].worker` points at a `.js/.mjs/.cjs` file inside the build output, the CLI uploads the bytes verbatim instead of re-bundling and wrapping with the Creek runtime. Lets portable apps (like the new `vite-react-drizzle` example) keep their deployed Worker free of any `@solcreek/*` dependency.
  • -**New example: `vite-react-drizzle`.** Reference for the 'same code runs locally against `better-sqlite3` and on Cloudflare against D1' pattern. Three files carry runtime differences (`server/local.ts`, `server/worker.ts`, `server/routes.ts` — the last one driver-agnostic). Everything else — schema, routes, frontend — is portable. Ships with `local-dev.md`, `deploy-creek.md`, and `migrate-away.md` documenting both the Creek happy path and how to leave Creek for plain `wrangler deploy`.
  • -**Sandbox rate limit 5 → 15/hr** for unverified users. Five deploys per hour wasn't enough to debug a template through a few iterations.
  • -**CLI regression fix (0.4.7 → 0.4.8).** `cli@0.4.7` shipped with a stale workspace pin to `sdk@0.4.2` that didn't export helpers the CLI imported at startup. Both `creek@0.4.7` and `@solcreek/cli@0.4.7` have been deprecated on npm — `npm install` auto-suggests 0.4.8+.
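The two safety properties above — server-derived R2 prefixes and ingest-time header redaction — can be sketched as follows. The helper names and session shape are assumptions for illustration, not Creek's real code:

```typescript
// Headers redacted at ingest and never stored.
const REDACTED_HEADERS = new Set(["authorization", "cookie", "set-cookie"]);

// The R2 prefix is derived from the authenticated session, never from URL
// params, so a tenant cannot address another team's logs by forging a request.
function logPrefix(session: { team: string; project: string }): string {
  return `logs/${session.team}/${session.project}/`;
}

function redactHeaders(headers: Record<string, string>): Record<string, string> {
  const out: Record<string, string> = {};
  for (const [name, value] of Object.entries(headers)) {
    out[name] = REDACTED_HEADERS.has(name.toLowerCase()) ? "[REDACTED]" : value;
  }
  return out;
}
```

The key point is structural: because the prefix never passes through user input, there is no per-request authorization check to forget.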
cli@0.4.6 · creek@0.4.3 · sdk@0.4.2

Install size −85%, Astro support, --dry-run, static-site fast path

  • -**Install size cut from ~170MB to ~25MB** — `miniflare` (and its transitive deps `workerd` and `sharp`, which were ~146MB combined) is no longer a runtime dependency of `@solcreek/cli`. `creek deploy` never touched them anyway. First-time `npx creek deploy` now installs in ~3–5 seconds instead of ~20s, and the postinstall warnings that used to appear when `sharp`'s native build failed are gone. `creek dev` still uses miniflare but loads it on demand from the user's project / global npm / a creek-managed cache dir, with a zero-jargon error message if it's not installed.
  • -`creek deploy gh:owner/repo` shorthand — matches the `gh` CLI convention. `gh:`, `gl:` (GitLab), and `bb:` (Bitbucket) all work as short prefixes alongside the longer `github:` / `gitlab:` / `bitbucket:` forms.
  • -`creek deploy --dry-run` — new flag that reports a plan (resolved config, framework, build command, bindings, cron/queue, auth status, target type) without executing. No network calls, no file uploads, no ToS prompt. Pair with `--json` for agent-safe machine-readable output. Unsupported modes short-circuit cleanly.
  • -Astro framework detection — SDK now recognizes `astro` as a first-class framework. Any Astro 3+ project (SPA, SSG, MDX, content collections, sharp image optimization) auto-detects and builds via `astro build`. End-to-end tested against Astro 6 + Tailwind 4 + MDX + sharp (`jiseeeh/serene-ink` theme) — clone, install, build, 25 assets uploaded, live URL in ~15 seconds.
  • -Static-site sandbox fast path — a directory with only `index.html` (no `package.json`) now deploys cleanly. Previously crashed with ENOENT. The simplest possible onboarding works: `echo '<h1>Hi</h1>' > index.html && npx creek deploy`.
  • -Sandbox content scan allowlists common embed providers — YouTube, Vimeo, Wistia, Loom, Twitter/X, GitHub Gist, CodePen, CodeSandbox, StackBlitz, JSFiddle, Spotify, SoundCloud, Figma, Google Docs/Calendar/Maps, Typeform, Airtable, Notion. The previous regex-only check blocked every dev blog or portfolio that embedded a YouTube video; now the scan parses iframe `src` as a URL and allows known content-embed hosts (subdomain-aware) while still blocking unknown domains and phishing surfaces.
  • -Richer `creek deploy --help` output — `meta.description` and every flag description rewritten to spell out sandbox-vs-production behavior, auto-detection chain, and example invocations. Targets AI coding agents reading help on a cold paste so they can act confidently with fewer turns.
  • -`/llms.txt` rewritten with an explicit "Paste-to-agent contract" section describing what to do when a human pastes `npx creek deploy` with no surrounding context — what `--dry-run` returns, how to read exit codes, CLI conventions (`--json`, `--yes`, non-TTY auto-JSON), and the MCP server + agent-skill distribution channels.
  • -Agent discoverability caption: the hero under `npx creek deploy` now reads "no signup · live in seconds · Claude + Codex + Cursor ready". Empirical testing confirmed that a cold Claude session given just `npx creek deploy` investigates safely (checks `--help`, scans config, asks for go/no-go) and completes the deploy in a few tool calls.
  • -Docs and breadcrumb scrub — every `--demo` reference across README, docs, pricing page, llms.txt, and internal error-path hints replaced with `creek deploy --template landing`, which actually works and gives users a real editable Vite + React starter (not a throwaway demo page).
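The embed allowlist change above — parse the iframe `src` as a URL instead of regex-matching raw markup — can be sketched like this. The host list here is abbreviated and the function name is illustrative:

```typescript
// Abbreviated allowlist of known content-embed hosts.
const ALLOWED_EMBED_HOSTS = [
  "youtube.com", "youtube-nocookie.com", "vimeo.com", "codepen.io", "loom.com",
];

function isAllowedEmbed(src: string): boolean {
  let url: URL;
  try {
    url = new URL(src); // real parsing, not a regex over markup
  } catch {
    return false; // unparseable src: block
  }
  if (url.protocol !== "https:") return false;
  // Subdomain-aware: "www.youtube.com" matches "youtube.com", but a
  // lookalike like "youtube.com.evil.example" does not.
  return ALLOWED_EMBED_HOSTS.some(
    (host) => url.hostname === host || url.hostname.endsWith("." + host),
  );
}
```

Matching on the parsed hostname suffix is what closes the phishing gap: a substring or regex check over the raw `src` would accept `https://youtube.com.evil.example/...`.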
cli@0.4.4 · creek@0.4.1

GitHub auto-deploy, remote build via connection

  • -`creek deploy --from-github [--project <slug>]` — trigger a remote build of the latest commit on the production branch via the project's GitHub connection, no local build required
  • -GitHub App integration: install the `Creek Deploy` App, connect a repository from the dashboard, and pushes to main auto-deploy while PRs get preview URLs with commit status
  • -Dashboard Project → Settings → GitHub Connection: show the connected repo, disconnect, or pick a new one via installation + repo picker
  • -Dashboard Project → Deployments: `Deploy latest` button (same endpoint as the CLI flag), inline commit SHA + message with a link to the GitHub diff, expanded error + failed-step display when a deploy fails, live polling so new deployments appear without a manual refresh
  • -Server: new `POST /github/deploy-latest` endpoint, auto-resolve slug collisions on import (appends `-2`, `-3`, … when two repos share a name), `github_connection.repoId` column to survive repository renames / transfers, webhook handlers for `repository.renamed` and `repository.transferred`
  • -CLI: auto-accept PKCS#1 private keys when creating the GitHub App JWT (GitHub's default export format) — no more manual `openssl pkcs8 -topk8`
  • -Control-plane: `GITHUB_WEBHOOK_SECRET` validation hardened, returns 503 instead of 500 when unconfigured
  • -Better Auth upgraded 1.5.6 → 1.6.2 — picks up OAuth CSRF + Drizzle date/null fixes that were affecting the dashboard login flow
  • -`CAPABILITIES.md` at the monorepo root: a one-page index of what Creek can do today and where each capability lives across runtime SDK, `creek.toml`, CLI, MCP, dashboard, and agent skills
  • -Agent skills bumped to v2.2 (installable via `npx skills add solcreek/skills`) — documents `--from-github`, trigger schemas, semantic resource names, and the GitHub auto-deploy walkthrough
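The slug-collision resolution on repo import can be sketched as a tiny loop (the function name and `Set`-based lookup are illustrative; the server checks its database):

```typescript
// Appends -2, -3, … until the candidate slug is free.
function resolveSlug(base: string, taken: Set<string>): string {
  if (!taken.has(base)) return base;
  for (let n = 2; ; n++) {
    const candidate = `${base}-${n}`;
    if (!taken.has(candidate)) return candidate;
  }
}
```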
runtime@0.4.0 · cli@0.4.2

Cron, queues, analytics, semantic resource names

  • -Cron triggers: declare `[triggers] cron = ["0 */6 * * *"]` in `creek.toml` and export a `scheduled()` handler — Creek wires the Cloudflare cron binding at deploy time
  • -Queue triggers: `[triggers] queue = true` provisions a per-project Cloudflare Queue, binds a `queue` producer at runtime (`import { queue } from 'creek'`), and invokes your exported `queue(batch, env, ctx)` consumer
  • -`creek queue send '<json-body>'` — inject a message into the project queue from the CLI
  • -`creek dev --trigger-cron "* * * * *"` — simulate a scheduled event firing during local development
  • -Per-tenant analytics tab in the dashboard: requests, errors, p50/p99 latency, plus a cron execution log pulled from the Cloudflare GraphQL Analytics API
  • -Semantic resource names in `creek.toml`: `database` / `storage` / `cache` / `ai` (the legacy `d1` / `kv` / `r2` / `ai` still work for backward compatibility)
  • -CLI worker-bundler now auto-generates `scheduled()` and `queue()` handler wrappers alongside `fetch()`
  • -`creek status` shows registered cron schedules and trigger config summary
  • -Request-scoped runtime bindings (D1, R2, KV, Queue) — multi-tenant safe by construction, no manual context threading
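The semantic-name aliasing above amounts to a small lookup table. The mapping is taken from the changelog entry; the function name and error behavior are assumptions for illustration:

```typescript
// Semantic names map to Cloudflare primitives; legacy names still resolve.
const RESOURCE_ALIASES: Record<string, string> = {
  database: "d1", // SQL database
  storage: "r2",  // object storage
  cache: "kv",    // key-value cache
  ai: "ai",
  // legacy names accepted for backward compatibility
  d1: "d1", kv: "kv", r2: "r2",
};

function resolveResource(name: string): string {
  const resolved = RESOURCE_ALIASES[name];
  if (!resolved) throw new Error(`unknown resource kind: ${name}`);
  return resolved;
}
```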
cli@0.3.9

First automated release + queue-driven web deploys

  • -First npm publish triggered by a git tag: `cli@0.3.9` via the new GitHub Actions `publish-cli.yml` workflow
  • -Companion publish workflows added for `@solcreek/sdk`, `@solcreek/runtime`, `@solcreek/ui`, `create-creek-app`, and the `creek` umbrella
  • -Remote-builder reworked as a Queue consumer — web-deploy builds now flow through a Cloudflare Queue producer in control-plane + consumer in remote-builder instead of `waitUntil` sharing request lifetime
  • -`creek ops` command for platform monitoring: list recent web deploys across the cluster, grouped by project
  • -Sandbox schema rename: the `sandbox` table becomes `deployments` with an `environment` column — one model for sandbox previews, web deploys, and authenticated production deployments
  • -`@solcreek/runtime` split into `/react` and `/hono` subpath exports so CF Workers builds don't pay the React cost when they don't need it
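The queue rework above decouples build lifetime from the triggering request. A conceptual sketch (an in-memory stand-in, not Cloudflare's Queues API — all names here are illustrative):

```typescript
type BuildJob = { project: string; commit: string };

const pending: BuildJob[] = [];

// Control-plane side: enqueue and return immediately — the HTTP response
// no longer has to outlive the build, as it did under waitUntil.
function enqueueBuild(job: BuildJob): void {
  pending.push(job);
}

// Remote-builder side: drain jobs on its own schedule, independent of
// any request lifetime.
function drainBuilds(run: (job: BuildJob) => void): number {
  let n = 0;
  while (pending.length > 0) {
    run(pending.shift() as BuildJob);
    n++;
  }
  return n;
}
```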
0.3.0-alpha.14

Query commands, MCP server, docs site

  • -`creek projects` / `creek deployments` / `creek status` — query your projects and deployments
  • -Remote MCP server at mcp.creek.dev — AI agents can deploy with a single tool call
  • -Documentation site at creek.dev/docs (powered by fumadocs)
  • -Bin aliases: `ck` and `crk` as shortcuts for `creek`
  • -llms.txt for AI agent discovery
  • -Distribution strategy and 21+ planning documents
0.3.0-alpha.12

Rate limit improvements, agent-first output

  • -Rate limit raised to 10/hr, demo deploys exempt
  • -429 responses include `hint` field guiding users to `creek login`
  • -Global `--json` and `--yes` flags on all 10 commands
  • -Non-TTY environments auto-enable JSON output
  • -Malformed JSON requests return 400 (not 500)
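The output-mode rule above reduces to one predicate (function name illustrative): `--json` forces JSON, and a non-TTY stdout — pipes, CI, agent harnesses — auto-enables it.

```typescript
// In the real CLI, stdoutIsTTY would come from process.stdout.isTTY.
function wantsJson(jsonFlag: boolean, stdoutIsTTY: boolean): boolean {
  return jsonFlag || !stdoutIsTTY;
}
```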
0.3.0-alpha.9

Sandbox UX redesign

  • -`creek deploy --demo` — zero-dependency instant deploy
  • -`creek deploy ./dist` — deploy any directory
  • -Empty directory shows helpful guidance instead of scaffold prompt
  • -Build output auto-detection (dist/, build/, out/)
  • -Removed interactive scaffold flow — Creek is a deploy platform, not a boilerplate generator
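The build-output auto-detection above can be sketched as a first-match scan over the candidate directories. The predicate parameter keeps the sketch pure; the real CLI checks the filesystem, and `detectBuildOutput` is an assumed name:

```typescript
// Detection order as listed above: dist/ before build/ before out/.
const OUTPUT_DIRS = ["dist", "build", "out"] as const;

function detectBuildOutput(exists: (dir: string) => boolean): string | null {
  for (const dir of OUTPUT_DIRS) {
    if (exists(dir)) return dir; // first match wins
  }
  return null; // no build output found; the CLI shows guidance instead
}
```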
0.3.0-alpha.1

First public release

  • -CLI published to npm as `creek`
  • -@solcreek/sdk published
  • -Security audit: shell injection, config permissions, XSS, env redaction
  • -Apache 2.0 license
Internal

MVP + Sandbox + Production infrastructure

  • -Phase 0-2.5 completed: platform stabilization, auth, dashboard, production deploy
  • -Sandbox system: deploy, status, claim, delete, content scanning, WAF
  • -Banner rendered in a Shadow DOM, with QR code and responsive layout
  • -OpenTofu managing 43+ Cloudflare resources
  • -creek.dev and app.creek.dev deployed via Creek (dogfooding)