One Binary.
Auth, DB, API.
Auth, database, realtime subscriptions, file storage, and server-side functions compiled into a single Go binary. SQLite locally, PostgreSQL in production. REST API auto-generated from your schema.
$ base serve
> Server started at http://127.0.0.1:8090
- REST API: /api/
- Realtime: /api/realtime
- Admin UI: /_/
- Auth: /api/collections/users/auth-with-password
$ curl localhost:8090/api/collections/tasks/records
{
  "items": [...],
  "totalItems": 42,
  "totalPages": 5
}
Database, auth, files, realtime, and functions in a ~15 MB binary with zero external dependencies.
Database
SQLite for local dev, PostgreSQL for production. REST API auto-generated from your schema with filtering, sorting, pagination, and relation expansion.
// Create a record
const record = await base.collection('tasks').create({
  title: 'Ship it',
  status: 'active',
  assignee: userId
});

// Query with filters
const tasks = await base.collection('tasks').getList(1, 20, {
  filter: `status = "active" && assignee = "${userId}"`,
  sort: '-created',
  expand: 'assignee'
});
Realtime Subscriptions
Subscribe to collection changes over SSE. Each create, update, or delete event is broadcast with the full record payload. Client SDKs handle reconnection and deduplication.
// Subscribe to all changes in a collection
base.collection('messages').subscribe('*', (e) => {
  console.log(e.action); // 'create' | 'update' | 'delete'
  console.log(e.record); // the changed record
});

// Subscribe to a single record by ID
base.collection('tasks').subscribe(recordId, (e) => {
  updateUI(e.record);
});
Authentication
Built-in auth with email/password, OAuth2 providers, and Hanzo IAM integration. Per-collection API rules define read/write/create/delete access using filter expressions.
- Email/Password: with email verification flow
- OAuth2: Google, GitHub, Apple, 10+ others
- Hanzo IAM: SSO via hanzo.id, SAML/OIDC
- API Rules: filter-based access per collection
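As an illustrative sketch only (a hypothetical rule set — the `assignee` field and the `@request.auth.id` macro are assumptions, not confirmed Base syntax), per-collection rules for a tasks collection might read:

```
// tasks collection API rules (one filter expression per operation)
listRule:   "@request.auth.id != ''"       // any signed-in user can list
viewRule:   "@request.auth.id != ''"
createRule: "@request.auth.id != ''"
updateRule: "assignee = @request.auth.id"  // only the assignee can edit
deleteRule: "assignee = @request.auth.id"
```

Rules use the same filter-expression syntax as record queries, so access control lives next to the schema rather than in application code.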
File Storage
Attach files to any collection record. Local filesystem in dev mode, S3-compatible object storage in production. Generates thumbnails on read via query parameters.
// Upload a file
const formData = new FormData();
formData.append('document', file);
formData.append('title', 'Report Q4');
const record = await base.collection('documents')
  .create(formData);

// Get file URL (with optional thumb params)
const url = base.files.getURL(record, record.document);
Server-Side Functions
JavaScript hooks execute before or after any API operation. Define custom HTTP routes, cron jobs, and event-driven logic in the embedded JSVM runtime. No separate function deployment.
Hooks
onRecordCreate((e) => {
  // Runs before record is saved
  e.record.set('slug', slugify(e.record.get('title')));
});
Custom Routes
routerAdd("POST", "/webhook", (e) => {
  const body = e.requestBody();
  // Process incoming payload
  return e.json(200, { ok: true });
});
Cron Jobs
cronAdd("daily cleanup", "0 3 * * *", () => {
  // Runs at 03:00 UTC daily
  const old = findRecords("logs", "created < -30d");
  deleteRecords("logs", old);
});
Client SDKs
JavaScript / TypeScript
import { BaseClient } from '@hanzoai/base'

const base = new BaseClient('http://localhost:8090')

// Auth
await base.collection('users')
  .authWithPassword(email, pass)

// CRUD with type safety
const tasks = await base.collection('tasks')
  .getFullList<Task>({
    sort: '-created',
    expand: 'assignee'
  })
Go
import "github.com/hanzoai/base"
app := base.New()

app.OnRecordCreate("tasks").
    BindFunc(func(e *core.RecordEvent) error {
        // Custom logic
        return e.Next()
    })

app.Start()
Dart / Flutter
import 'package:hanzo_base/hanzo_base.dart';
final base = BaseClient('http://localhost:8090');
// Realtime subscription
base.collection('messages').subscribe('*', (e) {
  print(e.action);
  print(e.record);
});
Dev to Prod
Local Development
Single binary, no dependencies. Embedded SQLite, starts in under 1 second.
$ base serve --dev
# Auth, DB, REST API, Admin UI
# all running at localhost:8090
Production
PostgreSQL backend, horizontal replicas, K8s-native. Same binary, different config.
# Kubernetes deployment
apiVersion: apps/v1
kind: Deployment
metadata:
  name: base
spec:
  replicas: 3
  selector:
    matchLabels:
      app: base
  template:
    metadata:
      labels:
        app: base
    spec:
      containers:
        - name: base
          image: ghcr.io/hanzoai/base
ZAP Protocol
Zero-copy binary protocol for inter-service communication between Base instances. Uses Cap'n Proto RPC instead of JSON serialization.
- Cap'n Proto RPC for zero-copy message passing
- Database, KV, and gateway transport layers
- ML-KEM / ML-DSA post-quantum TLS
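For illustration only (a hypothetical Cap'n Proto schema sketch — the interface and method names are assumptions, not the actual ZAP definitions), an inter-instance record interface could look like:

```
# Hypothetical sketch; not the shipped ZAP schema
@0xd4f8e2a1b3c5d7e9;

interface Records {
  # Zero-copy record fetch between Base instances
  get @0 (collection :Text, id :Text) -> (record :Data);
  list @1 (collection :Text, filter :Text) -> (records :List(Data));
}
```

Because Cap'n Proto reads fields directly from the wire buffer, messages like these skip the serialize/parse round trip that JSON requires.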
Admin Dashboard
Served at /_/. Embedded in the binary. No separate deployment.
- Visual schema editor for collections
- Record browser with inline editing
- User management and auth configuration
- API rules editor
- Request logs and system metrics
Quick Start
1. Install
# macOS
brew install hanzoai/tap/base

# Linux
curl -fsSL https://base.hanzo.ai/install | sh

# Docker
docker pull ghcr.io/hanzoai/base
2. Run
# Start with dev mode
base serve --dev

# Admin UI: localhost:8090/_/
# API: localhost:8090/api/
3. Query
import { BaseClient } from '@hanzoai/base'

const base = new BaseClient('http://localhost:8090')

const records = await base
  .collection('posts')
  .getFullList()
Pricing
Each Base instance runs on a dedicated compute node. Pricing syncs from pricing.hanzo.ai.
Starter
$0.007/hr
- 1 VM
- 1 vCPU
- 1 GB RAM
- 20 GB SSD
- 500 GB transfer
- Free $5 credit
Builder
$0.014/hr
- Up to 5 VMs
- 2 vCPU
- 2 GB RAM
- 40 GB SSD
- 1 TB transfer
Dev
$0.021/hr
- Up to 25 VMs
- 2 vCPU
- 8 GB RAM
- 25 GB SSD
- 3 TB transfer
Pro
$0.035/hr
- Up to 25 VMs
- 2 dedicated vCPU
- 8 GB RAM
- 80 GB SSD
- 2 TB transfer
Turbo
$0.054/hr
- Up to 25 VMs
- 4 vCPU
- 16 GB RAM
- 160 GB SSD
- 4 TB transfer
Turbo Dedicated
$0.068/hr
- Up to 25 VMs
- 4 dedicated vCPU
- 16 GB RAM
- 160 GB SSD
- 4 TB transfer
Business
$0.304/hr
- Up to 50 VMs
- 8 dedicated vCPU
- 32 GB RAM
- 240 GB SSD
- 20 TB transfer
Enterprise
$0.596/hr
- Up to 100 VMs
- 16 dedicated vCPU
- 64 GB RAM
- 360 GB SSD
- 40 TB transfer
Scale
$1.179/hr
- Up to 250 VMs
- 32 dedicated vCPU
- 128 GB RAM
- 600 GB SSD
- 50 TB transfer
Mega
$1.804/hr
- Up to 500 VMs
- 48 dedicated vCPU
- 192 GB RAM
- 960 GB SSD
- 60 TB transfer
Ultra
$5.554/hr
- Up to 1000 VMs
- 96 dedicated vCPU
- 384 GB RAM
- 1.9 TB SSD
- 120 TB transfer
Need More?
Custom clusters, bare-metal nodes, private networking, SLA guarantees, and dedicated support. Runs on your infrastructure or ours.
Start Building
Install the binary, define your collections, and ship. Open-source under MIT. Backed by Hanzo AI.