v0.4.2 — sites, apps & agents on one runtime

Ship AI to production.
Not to a sandbox.

Ship4AI is the deployment platform for AI-native software — sites, apps, and agents on one runtime. Write TypeScript, push to a branch, get a live URL with logs, eval, and rollback. No canvases. No nodes. No nonsense.

Start shipping
npx create-ship4ai
47ms
Cold start
12 regions
Edge deploys
$0.0001
Per invocation
apps/docs-copilot.ts
ts
import { app, site } from "ship4ai"
import { chat, rag } from "ship4ai/ai"

export default app({
  name: "docs-copilot",
  model: "claude-opus-4.6",
  routes: {
    "/": site({ content: "./content" }),
    "/api/chat": chat({ retriever: rag("./docs") }),
  },
  domain: "docs.acme.dev",
})
live execution
iad1 · 218ms
Email received from elena@acme.dev
Classified: bug_report · confidence 0.94
Tool call createTicket()
LIN-2847 created · routed to @core-team
Slack #support notified
Done · 1.2s · 4 tool calls · $0.0034
Sites, apps & agents on one runtime · 12.4M invocations / 24h · p99 latency 218ms · 0 cold starts on edge runtime · TypeScript-first — no canvas, no YAML · $0.0001 / invocation · Streaming on by default · Eval suite runs on every push
02 — Showcase

Sites. Apps. Agents. One runtime.

Three shapes of AI-native software, copy-paste ready. Every example is a working project, deployed on the same runtime you'll use.

agents/site.ts
ts
import { site } from "ship4ai"
import { semanticSearch, summarize } from "ship4ai/ai"

export default site({
  name: "acme-docs",
  content: "./content",
  features: {
    search: semanticSearch({ model: "openai/text-embed-3" }),
    summary: summarize({ model: "claude-opus-4.6" }),
  },
  domain: "docs.acme.dev",
})
live execution
iad1 · 218ms
Build · 142 pages · 3,204 chunks
Embedded in 4.8s
Deployed · docs.acme.dev
Search p50 34ms
03 — Primitives

The runtime is the product.

We are not another LLM wrapper. Ship4AI gives you the eight primitives every production AI site, app, or agent eventually needs — built into the platform, not bolted on.

Preview deploys

Every git push gets a unique URL. Test your site, app, or agent in isolation before merging to main.

$ git push → preview.ship4.ai/abc123

Observability built in

Every tool call, every token, every dollar — traced and searchable.

$ ship4ai logs --tail

Secrets, scoped

Bring your own keys. Scope per env. Never logged.

$ ship4ai env add OPENAI_KEY

Atomic rollbacks

Bad deploy? One command, gone. Previous version is one second away.

$ ship4ai rollback

Eval on every push

Define golden cases once. Block deploys when scores regress.

$ ship4ai eval --threshold 0.85
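
A golden-case suite might look like the sketch below. The file path, the `defineEval` helper, and the case shape are illustrative assumptions modeled on the triage example above, not documented API:

ts
// evals/triage.eval.ts — illustrative sketch, not documented API.
import { defineEval } from "ship4ai/eval" // hypothetical module

export default defineEval({
  name: "triage-golden-cases",
  threshold: 0.85, // same bar as `ship4ai eval --threshold 0.85`
  cases: [
    {
      input: "App crashes when I upload a PNG over 10MB",
      expect: { label: "bug_report", tool: "createTicket" },
    },
    {
      input: "How do I rotate my API key?",
      expect: { label: "question", tool: null },
    },
  ],
})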

Edge runtime

Cold starts under 50ms in 12 regions. Streaming on by default.

$runtime: 'edge'

Trigger on anything

Webhooks, cron, email, queues. Wire your agent to the real world in two lines.

$on: { cron: '0 9 * * *' }
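
Sketched out, a trigger block might look like this. The `agent` export and the `on` shape are inferred from the snippet above; the webhook route is a made-up placeholder:

ts
// agents/digest.ts — illustrative sketch; trigger shape is assumed.
import { agent } from "ship4ai"

export default agent({
  name: "daily-digest",
  on: {
    cron: "0 9 * * *",        // fires daily at 09:00 (minute hour day month weekday)
    webhook: "/hooks/github", // hypothetical webhook route
  },
  async run(ctx) {
    // handle the event here
  },
})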

Stateful by default

Memory, sessions, embeddings — first-class across sites, apps, and agents. No vector DB to wire up.

$ctx.memory.recall(query)
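
In an agent handler, that might read like the sketch below. Everything beyond `ctx.memory.recall(query)` — the `run()` signature, `ctx.model.complete`, `ctx.memory.remember` — is an assumption, shown only to illustrate the idea of built-in state:

ts
// agents/support.ts — illustrative sketch of built-in memory.
import { agent } from "ship4ai"

export default agent({
  name: "support-bot",
  async run(ctx, message: { text: string }) {
    // Recall prior session context; no vector DB to provision.
    const context = await ctx.memory.recall(message.text)
    const answer = await ctx.model.complete({ context, prompt: message.text }) // hypothetical
    await ctx.memory.remember({ role: "assistant", text: answer })             // hypothetical
    return answer
  },
})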
04 — Workflow

Four steps. No meeting required.

The same workflow you already use for web apps — extended to AI. If you can ship a Next.js project, you can ship a production agent.

01

Write

TypeScript. Your editor, your stack. Pick site, app, or agent — the shape of the export is the only thing that changes.

import { site, app, agent } from 'ship4ai'

export default app({...})
02

Push

Git push to any branch. Preview URL in your PR within seconds.

$ git push origin feat/triage
 
→ deploying preview...
03

Eval

Golden cases run on every push. Regress and the deploy is blocked.

$ ship4ai eval
 
✓ 47/48 passed · 0.91 avg
04

Ship

Promote to prod with one command. Atomic. Reversible. Logged.

$ ship4ai promote
 
→ live at agent.acme.com
Works with GitHub, GitLab, Bitbucket
CLI, dashboard, or REST — pick your poison
SOC 2 Type II, HIPAA-ready

Built for engineers shipping AI to production

12.4M
Invocations / 24h
218ms
p99 latency
47ms
Cold start
12
Edge regions
99.99%
Runtime uptime
0
DSLs to learn
SOC 2
Type II audited
TS
First-class
"The runtime is the product. We give you the primitives every AI site, app, and agent needs — not another LLM wrapper pretending to be infrastructure."
Engineering
Principle 01
"Eval on every push. If golden cases regress, the deploy is blocked. That is the only way we trust shipping AI on Friday afternoons."
Reliability
Principle 02
"Vercel showed the world what the frontend runtime could be. We are doing the same thing for AI. Push code, get a URL, watch it run."
Product
Principle 03
05 — Pricing

Pay for invocations. Not seats.

Your team can grow without your bill exploding. We charge for the work the agents do, not the people who deploy them.

Hobby

$0/forever

For tinkering, weekend projects, the AI app that scratches your itch.

Start free
  • 10k invocations / month
  • 1 production project
  • Community support
  • Edge runtime
Most popular

Pro

$24/month

For builders shipping real products. Everything you need, no enterprise nonsense.

Start shipping
  • 1M invocations / month
  • Unlimited sites, apps & agents
  • Eval suites + previews
  • Custom domains
  • Priority support

Scale

Custom

For teams shipping agents at the edge of what is possible. Talk to us.

Talk to sales
  • Unlimited everything
  • Dedicated runtime
  • SOC 2, HIPAA, BAA
  • Bring your own cloud
  • Slack channel + SLA
One runtime for AI sites, apps & agents

Stop demoing.
Start shipping.

Your competitors are still building flowcharts. You can have a real AI site, app, or agent running in production before lunch.

Deploy your first project
$ npx create-ship4ai my-app