# client.storage

S3-compatible object storage with per-agent key prefixing and signed URLs. Cloudflare R2 in production, MinIO in `stackbone dev`. Every key is automatically prefixed with the agent's identity, so two agents that pick the same logical bucket name never collide.
## Mental model

`client.storage.from(bucket)` returns a `StorageBucket` scoped to a logical bucket name. Internally every key is rewritten to `${agentId}/${bucket}/${key}` before hitting the underlying physical S3 bucket (`S3_BUCKET`). Path-traversal segments (`..`) are rejected, so a caller-controlled key cannot escape the agent's namespace.

The `S3Client` is built lazily on the first method call and reused for the lifetime of the process. Required capability: `storage.s3`. Every method that issues an S3 round-trip awaits the contract handshake; `getPublicUrl` is a pure URL builder and intentionally bypasses the gate.
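As an illustration of that rule (not the SDK's internals), the rewrite amounts to something like the sketch below; the function name and the thrown error are assumptions, and only the prefix shape and the `..` rejection come from the description above:

```ts
// Illustrative sketch of the key-prefixing rule described above: a hypothetical
// helper, not part of @stackbone/sdk.
function toPhysicalKey(agentId: string, bucket: string, key: string): string {
  // Reject path-traversal segments so a caller-controlled key stays inside the namespace.
  if (key.split('/').includes('..')) {
    throw new Error('s3_invalid_key');
  }
  return `${agentId}/${bucket}/${key}`;
}

// Two agents can pick the same logical bucket name without colliding:
// toPhysicalKey('agent-a', 'uploads', 'docs/welcome.txt') -> 'agent-a/uploads/docs/welcome.txt'
// toPhysicalKey('agent-b', 'uploads', 'docs/welcome.txt') -> 'agent-b/uploads/docs/welcome.txt'
```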
## Configuration

| Source | Falls back to |
|---|---|
| `createClient({ s3.accessKeyId })` | `AWS_ACCESS_KEY_ID` |
| `createClient({ s3.secretAccessKey })` | `AWS_SECRET_ACCESS_KEY` |
| `createClient({ s3.endpoint })` | `S3_ENDPOINT` |
| `createClient({ s3.bucket })` | `S3_BUCKET` |
| `createClient({ agentId })` | `STACKBONE_AGENT_ID` |

Missing any of these surfaces `s3_credentials_missing`, `s3_bucket_missing`, or `agent_id_missing` with an actionable hint.
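For example, pinning the endpoint and bucket explicitly while letting credentials fall back to the environment could look like the following sketch; it assumes the dotted `s3.*` paths in the table denote a nested `s3` options object, and the values are placeholders:

```ts
import { createClient } from '@stackbone/sdk';

// Explicit options win; anything omitted falls back to the environment
// variables listed in the table (AWS_ACCESS_KEY_ID, AWS_SECRET_ACCESS_KEY, ...).
const client = createClient({
  agentId: 'reporting-agent',                // otherwise STACKBONE_AGENT_ID
  s3: {
    endpoint: 'https://minio.internal:9000', // otherwise S3_ENDPOINT
    bucket: 'stackbone-dev',                 // otherwise S3_BUCKET
  },
});
```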
## Upload, download, list, remove

```ts
import { createClient } from '@stackbone/sdk';

const client = createClient();
const bucket = client.storage.from('uploads');

// Upload — body can be Blob | Uint8Array | string.
await bucket.upload('docs/welcome.txt', 'Hi there!', {
  contentType: 'text/plain',
  metadata: { authorId: 'user-123' },
});

// Download — materialises the object as a Blob in memory.
// For large objects, prefer getSignedDownloadUrl + fetch().
const downloaded = await bucket.download('docs/welcome.txt');
if (downloaded.error) throw new Error(downloaded.error.code);
const text = await downloaded.data.text();

// Paginated list, scoped by prefix.
const listed = await bucket.list({ prefix: 'docs/', limit: 50 });
if (listed.error) throw new Error(listed.error.code);
for (const obj of listed.data.objects) {
  console.log(obj.key, obj.size, obj.lastModified);
}

// Remove.
await bucket.remove('docs/welcome.txt');
```

Pagination is cursor-based — `listed.data.nextCursor` is set when S3 truncated the response; pass it back as `cursor` in the next call.
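A prefix can therefore be drained with a small loop. This is a sketch built only from the `list`, `cursor`, and `nextCursor` behaviour described above, reusing the `bucket` handle from the previous example:

```ts
// Follow nextCursor until the listing is no longer truncated.
const allKeys: string[] = [];
let cursor: string | undefined;

do {
  const page = await bucket.list({ prefix: 'docs/', limit: 50, cursor });
  if (page.error) throw new Error(page.error.code);
  allKeys.push(...page.data.objects.map((obj) => obj.key));
  cursor = page.data.nextCursor;
} while (cursor);
```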
## Public and signed URLs

```ts
// Pure URL builder. Whether the URL is publicly fetchable depends on
// the bucket policy. No S3 round-trip; the contract gate is skipped.
const publicUrl = bucket.getPublicUrl('docs/welcome.txt');
```

For private buckets, mint short-lived signed URLs:

```ts
// Default TTL: 3600s (1h). Override with `expiresIn`.
const upload = await bucket.getSignedUploadUrl('uploads/raw.bin', {
  expiresIn: 600,
  contentType: 'application/octet-stream', // pinned into the signature
});

const download = await bucket.getSignedDownloadUrl('docs/welcome.txt', {
  expiresIn: 300,
});
```

`contentType` on `getSignedUploadUrl` is pinned into the signature, so the client uploading to the URL must send a matching `Content-Type` header — useful for enforcing an image type on a signed direct upload.
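Whatever client consumes the signed upload URL then has to echo that pinned type. A minimal sketch using plain `fetch`; how you hand the URL string to the uploader is up to you, and only the matching `Content-Type` requirement comes from the docs above:

```ts
// Upload raw bytes to a presigned URL obtained from getSignedUploadUrl.
async function putToSignedUrl(signedUrl: string, body: Uint8Array): Promise<void> {
  const res = await fetch(signedUrl, {
    method: 'PUT',
    body,
    // Must match the contentType pinned into the signature, or the request is rejected.
    headers: { 'Content-Type': 'application/octet-stream' },
  });
  if (!res.ok) throw new Error(`signed upload failed: ${res.status}`);
}
```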
## Errors

| Code | When |
|---|---|
| `s3_credentials_missing` | Access key, secret, or endpoint absent. |
| `s3_bucket_missing` | `S3_BUCKET` (or the `s3.bucket` override) absent. |
| `agent_id_missing` | `STACKBONE_AGENT_ID` (or the `agentId` override) absent. |
| `s3_invalid_key` | Key contains a `..` segment. |
| `s3_invalid_argument` | `list({ limit })` with `limit` ≤ 0. |
| `s3_empty_response` | S3 succeeded but returned no body. |
| `s3_error` | Anything else (auth, throttling, network). `error.meta` includes the AWS HTTP status, name, and fault when available; `error.cause` is the original SDK error. |

The contract gate adds `contract_version_unsupported`, `capability_unavailable`, `contract_unreachable`, and `contract_malformed` — see the overview.
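Branching on `error.code` is usually enough to separate configuration mistakes from transient failures. A sketch that leans only on the fields documented above (`code`, `meta`, `cause`):

```ts
const result = await bucket.download('docs/welcome.txt');

if (result.error) {
  switch (result.error.code) {
    case 's3_invalid_key':
      // The caller-supplied key contained a `..` segment; fix the caller, not the config.
      throw new Error('refusing to read outside the agent namespace');
    case 's3_error':
      // Auth, throttling, or network trouble. meta carries the AWS HTTP status when available.
      console.error('S3 call failed', result.error.meta, result.error.cause);
      throw new Error('storage temporarily unavailable');
    default:
      // Configuration errors such as s3_credentials_missing or agent_id_missing.
      throw new Error(result.error.code);
  }
}
```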
## Where to go next

- `client.database` — for typed Postgres alongside your blob storage.
- Persistence tutorial — a guided end-to-end build that exercises `client.database` and `client.storage` together.
- `client.rag` — a higher-level pipeline that ingests parsed text into Postgres + `pgvector`. Use `client.storage` for the raw uploads and `client.rag` for the indexable extract.