Caching
Baeta ships a caching library with support for multiple storage adapters. The cache offers declarative query definitions, automatic cache reconciliation on insert/update/delete, and type-safe cache operations.
Key Features
- Type-safe cache operations
- Declarative query definitions with defineQuery
- Automatic cache reconciliation on insert/update/delete
- Index-based query invalidation
- Custom serialization/deserialization
- Multiple storage adapters
- TTL support
Installation
yarn add @baeta/cache
npm install @baeta/cache
pnpm add @baeta/cache
bun add @baeta/cache
Storage Adapters
Baeta supports several storage adapters for different use cases:
Redis (Recommended)
Best for production environments with high query volumes.
yarn add @baeta/cache-ioredis ioredis
npm install @baeta/cache-ioredis ioredis
pnpm add @baeta/cache-ioredis ioredis
bun add @baeta/cache-ioredis ioredis
// src/lib/redis.ts
import { RedisCacheClient } from "@baeta/cache-ioredis";
import Redis from "ioredis";
const redis = new Redis("redis://localhost:6379");
export const redisClient = new RedisCacheClient(redis);
Upstash (Recommended for Serverless)
Optimized for serverless environments.
yarn add @baeta/cache-upstash @upstash/redis
npm install @baeta/cache-upstash @upstash/redis
pnpm add @baeta/cache-upstash @upstash/redis
bun add @baeta/cache-upstash @upstash/redis
import { UpstashCacheClient } from "@baeta/cache-upstash";
const client = new UpstashCacheClient({
url: "UPSTASH_REDIS_URL",
token: "UPSTASH_REDIS_TOKEN",
});
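In practice the URL and token come from environment variables rather than literals. A sketch of the same client construction reading from the environment (the variable names are illustrative, not required by the SDK):

```typescript
import { UpstashCacheClient } from "@baeta/cache-upstash";

// Read credentials from the environment instead of hardcoding them.
const client = new UpstashCacheClient({
  url: process.env.UPSTASH_REDIS_URL ?? "",
  token: process.env.UPSTASH_REDIS_TOKEN ?? "",
});
```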
Cloudflare
For Cloudflare Workers environments.
yarn add @baeta/cache-cloudflare
npm install @baeta/cache-cloudflare
pnpm add @baeta/cache-cloudflare
bun add @baeta/cache-cloudflare
Basic Setup
1. Create a cache with queries
// src/modules/user/user.cache.ts
import { createCache, defineQuery } from "@baeta/cache";
import { redisClient } from "../../lib/redis.ts";
import { db } from "../../lib/db.ts";
const findUser = defineQuery({
resolve: async (args: { id?: string | null; email?: string | null }) => {
return db.user.findFirst({
where: {
id: args.id ?? undefined,
email: args.email ?? undefined,
},
});
},
});
const findUsers = defineQuery({
resolve: async (_args: {}) => {
return db.user.findMany();
},
});
export const userCache = createCache(redisClient, {
name: "UserCache",
// Custom serialization (optional)
parse: (value) => JSON.parse(value),
serialize: (value) => JSON.stringify(value),
})
.withQueries({
findUser,
findUsers,
})
.build();
Bump the revision option whenever the shape of cached values changes — it invalidates all existing entries for that cache.
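The parse/serialize options above use plain JSON, which returns Date fields as strings after a round trip. Below is a minimal, self-contained sketch of a pair that preserves Date values; the `$date:` prefix is an arbitrary choice for this example, not a Baeta convention:

```typescript
// Tag Date values during serialization so parse can revive them.
const DATE_PREFIX = "$date:";

function serialize(value: unknown): string {
  return JSON.stringify(value, function (this: any, key: string, v: unknown) {
    // By the time the replacer runs, Dates have already been converted to
    // ISO strings via toJSON; check the original value on `this` instead.
    const raw = this[key];
    return raw instanceof Date ? DATE_PREFIX + raw.toISOString() : v;
  });
}

function parse(text: string): unknown {
  return JSON.parse(text, (_key: string, v: unknown) =>
    typeof v === "string" && v.startsWith(DATE_PREFIX)
      ? new Date(v.slice(DATE_PREFIX.length))
      : v,
  );
}
```

The same two functions can then be passed as the parse and serialize options of createCache.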
Caching Examples
Basic Query Caching
Use .map() to route a field through a cached query. The mapper turns the resolver's parameters into the query arguments:
import { UserModule } from "./typedef.ts";
import { userCache } from "./user.cache.ts";
const { Query } = UserModule;
export default Query.$fields({
user: Query.user.map(({ args }) =>
userCache.queries.findUser({
id: args.where.id,
email: args.where.email,
}),
),
users: Query.users
.map(() => userCache.queries.findUsers({}))
.map(({ source }) => source ?? []),
});
Mutation Handling
Use update for existing items and insert for new items — the cache will automatically reconcile all related queries:
const createUserMutation = Mutation.createUser.resolve(async ({ args }) => {
const user = await db.user.create({ data: args.data });
// Use "insert" for new items, so cache queries can reconcile
await userCache.insert(user);
return user;
});
const updateUserMutation = Mutation.updateUser
.$use(async (next) => {
const user = await next();
if (user) {
// Use "update" for existing items — automatically updates all queries
await userCache.update(user);
}
return user;
})
.resolve(async ({ args }) => {
return db.user.update({
where: { id: args.where.id },
data: args.data,
});
});
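Deletions follow the same pattern. The sketch below assumes the cache exposes a delete method that mirrors insert/update and triggers delete reconciliation; verify the exact method name against the Baeta cache API:

```typescript
const deleteUserMutation = Mutation.deleteUser.resolve(async ({ args }) => {
  const user = await db.user.delete({ where: { id: args.where.id } });
  // Assumed API: removes the item so queries can reconcile via onDelete
  await userCache.delete(user);
  return user;
});
```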
Index-Based Query Invalidation
For relationship queries, use indexArgsBy to enable targeted invalidation. When items are inserted or deleted, only queries matching the relevant index values are invalidated:
import { createCache, defineQuery } from "@baeta/cache";
export const userPhotoCache = createCache(redisClient, {
name: "UserPhotoCache",
parse: (value) => JSON.parse(value),
serialize: (value) => JSON.stringify(value),
})
.withQueries({
findUserPhotos: defineQuery({
resolve: async (args: { userId: string }) => {
return db.userPhoto.findMany({
where: { userId: args.userId },
});
},
// Index queries by userId for targeted invalidation
indexArgsBy: {
userId: true,
},
// On insert, invalidate queries matching the new item's userId
onInsert(items, helpers) {
const args = items.map((item) => ({ userId: item.userId }));
return helpers.invalidateByArgs(args);
},
// On delete, invalidate queries matching the deleted item's userId
onDelete(pairs, helpers) {
const args = pairs
.map((item) => item.previous && { userId: item.previous.userId })
.filter((el) => el != null);
return helpers.invalidateByArgs(args);
},
}),
})
.build();
Then use it in resolvers:
const userPhotosResolver = User.photos
.map(({ source }) =>
userPhotoCache.queries.findUserPhotos({
userId: source.id,
}),
)
.withDefault([]);
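Baeta handles this bookkeeping internally. The following self-contained sketch (a conceptual illustration, not Baeta's actual implementation) shows the idea behind indexing cached queries by an argument so that invalidation only touches matching entries:

```typescript
// Entries are keyed by their full serialized args; a secondary index maps
// each indexed value (here userId) to the entry keys that used it.
type Args = { userId: string };

class IndexedCache {
  private entries = new Map<string, unknown>();
  private index = new Map<string, Set<string>>(); // userId -> entry keys

  set(args: Args, value: unknown): void {
    const key = JSON.stringify(args);
    this.entries.set(key, value);
    if (!this.index.has(args.userId)) this.index.set(args.userId, new Set());
    this.index.get(args.userId)!.add(key);
  }

  get(args: Args): unknown {
    return this.entries.get(JSON.stringify(args));
  }

  // Drop only the entries whose userId matches; cached queries for
  // unrelated users stay warm.
  invalidateByArgs(argsList: Args[]): void {
    for (const { userId } of argsList) {
      for (const key of this.index.get(userId) ?? []) {
        this.entries.delete(key);
      }
      this.index.delete(userId);
    }
  }
}
```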
Best Practices
- Choose the Right Adapter
- Use Redis for production environments
- Use Upstash for serverless applications
- Cache Invalidation Strategy
- Use indexArgsBy for targeted invalidation of relationship queries
- Implement onInsert/onDelete hooks for automatic query invalidation
- Use insert for new items and update for existing items
- Performance Optimization
- Configure Redis with an LRU eviction policy so least-recently-used keys are evicted under memory pressure
- Use index-based invalidation over full query clearing when possible
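The eviction policy is configured on the Redis server itself, not in Baeta. For example (the memory limit below is illustrative):

```
# redis.conf -- cap memory and evict least-recently-used keys first
maxmemory 256mb
maxmemory-policy allkeys-lru
```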
For detailed examples, see the Baeta caching example.