Ship AI to production.
Not to a sandbox.
Ship4AI is the deployment platform for AI-native software — sites, apps, and agents on one runtime. Write TypeScript, push to a branch, get a live URL with logs, eval, and rollback. No canvases. No nodes. No nonsense.
npx create-ship4ai

```ts
import { app, site } from "ship4ai"
import { chat, rag, embed } from "ship4ai/ai"

export default app({
  name: "docs-copilot",
  model: "claude-opus-4.6",
  routes: {
    "/": site({ content: "./content" }),
    "/api/chat": chat({ retriever: rag("./docs") }),
  },
  domain: "docs.acme.dev",
})
```

Sites. Apps. Agents. One runtime.
Three shapes of AI-native software, copy-paste ready. Every example is a working project, deployed on the same runtime you'll use.
```ts
import { site } from "ship4ai"
import { semanticSearch, summarize } from "ship4ai/ai"

export default site({
  name: "acme-docs",
  content: "./content",
  features: {
    search: semanticSearch({ model: "openai/text-embed-3" }),
    summary: summarize({ model: "claude-opus-4.6" }),
  },
  domain: "docs.acme.dev",
})
```

The runtime is the product.
We are not another LLM wrapper. Ship4AI gives you the eight primitives every production AI site, app, or agent eventually needs — built into the platform, not bolted on.
Preview deploys
Every git push gets a unique URL. Test your site, app, or agent in isolation before merging to main.
Observability built in
Every tool call, every token, every dollar — traced and searchable.
Secrets, scoped
Bring your own keys. Scope per env. Never logged.
Atomic rollbacks
Bad deploy? One command, gone. Previous version is one second away.
Eval on every push
Define golden cases once. Block deploys when scores regress.
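A golden-case suite might look like the sketch below. The `evalSuite` helper, its options, and the case contents are illustrative assumptions, not a documented Ship4AI API — they just show the shape of "define cases once, gate deploys on regressions":

```typescript
// Hypothetical sketch — evalSuite, expect.contains, and gate are
// assumed names, not a documented Ship4AI API.
import { evalSuite } from "ship4ai/eval"

export default evalSuite({
  name: "docs-copilot-golden",
  cases: [
    {
      input: "How do I rotate an API key?",
      // Example assertion: the answer must keep mentioning key rotation.
      expect: { contains: "rotate" },
    },
    {
      input: "What runtime does Ship4AI use?",
      expect: { contains: "edge" },
    },
  ],
  // Block the deploy when the pass rate regresses below this score.
  gate: { minScore: 0.95 },
})
```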
Edge runtime
Cold starts under 50ms in 12 regions. Streaming on by default.
Trigger on anything
Webhooks, cron, email, queues. Wire your agent to the real world in two lines.
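A trigger wiring might look like the following sketch — the `agent` export and the `on.*` helpers are assumed names based on the pattern described above, not a documented API:

```typescript
// Hypothetical sketch — agent and on.* are assumed names,
// not a documented Ship4AI API.
import { agent, on } from "ship4ai"

export default agent({
  name: "inbox-triage",
  model: "claude-opus-4.6",
  triggers: [
    on.cron("0 9 * * 1"),         // every Monday at 09:00
    on.webhook("/hooks/github"),  // fire on incoming webhooks
    on.email("support@acme.dev"), // fire on incoming mail
  ],
  run: async (event) => {
    // Handle whatever woke the agent up.
    return `Handled ${event.type} event`
  },
})
```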
Stateful by default
Memory, sessions, embeddings — first-class across sites, apps, and agents. No vector DB to wire up.
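As a sketch of what "stateful by default" could look like in practice — the `memory` helper and its options here are assumptions for illustration, not a documented Ship4AI API:

```typescript
// Hypothetical sketch — memory() and its options are assumed
// names, not a documented Ship4AI API.
import { app } from "ship4ai"
import { chat, memory } from "ship4ai/ai"

export default app({
  name: "support-copilot",
  model: "claude-opus-4.6",
  routes: {
    "/api/chat": chat({
      // Per-user conversation history with embeddings, persisted by
      // the runtime — no external vector DB or session store to wire up.
      memory: memory({ scope: "session", embed: true }),
    }),
  },
})
```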
Four steps. No meeting required.
The same workflow you already use for web apps — extended to AI. If you can ship a Next.js project, you can ship a production agent.
Write
TypeScript. Your editor, your stack. Pick site, app, or agent — the shape of the export is the only thing that changes.
Push
Git push to any branch. Preview URL in your PR within seconds.
Eval
Golden cases run on every push. Regress and the deploy is blocked.
Ship
Promote to prod with one command. Atomic. Reversible. Logged.
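From a terminal, the four steps might read like this walkthrough. Only `npx create-ship4ai` appears elsewhere on this page; the other `ship4ai` subcommands are illustrative guesses at what such a CLI could look like, not documented commands:

```shell
# Illustrative walkthrough — the ship4ai subcommands below are
# assumed names, not a documented CLI.
npx create-ship4ai my-app         # 1. Write: scaffold a project
git push origin feature/copilot   # 2. Push: preview URL lands in the PR
npx ship4ai eval                  # 3. Eval: run golden cases before merging
npx ship4ai promote --prod        # 4. Ship: atomic, reversible, logged
```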
Built for engineers shipping AI to production
"The runtime is the product. We give you the primitives every AI site, app, and agent needs — not another LLM wrapper pretending to be infrastructure."
"Eval on every push. If golden cases regress, the deploy is blocked. That is the only way we trust shipping AI on Friday afternoons."
"Vercel showed the world what the frontend runtime could be. We are doing the same thing for AI. Push code, get a URL, watch it run."
Pay for invocations. Not seats.
Your team can grow without your bill exploding. We charge for the work the agents do, not the people who deploy them.
Hobby
For tinkering, weekend projects, the AI app that scratches your itch.
- 10k invocations / month
- 1 production project
- Community support
- Edge runtime
Pro
For builders shipping real products. Everything you need, no enterprise nonsense.
- 1M invocations / month
- Unlimited sites, apps & agents
- Eval suites + previews
- Custom domains
- Priority support
Scale
For teams shipping agents at the edge of what is possible. Talk to us.
- Unlimited everything
- Dedicated runtime
- SOC 2, HIPAA, BAA
- Bring your own cloud
- Slack channel + SLA
Stop demoing.
Start shipping.
Your competitors are still building flowcharts. You can have a real AI site, app, or agent running in production before lunch.
npx create-ship4ai my-app