You can build a robust, type-safe, and well-documented API using this guide. It walks you through the entire process of setting up your Hono API, incorporating best practices and useful libraries, along with modern cloud services for databases, caching, feature flagging, and authentication. I am also providing the client (Aniket Nagla) with a custom working backend repo for this project; please refer to it as well to get a better idea of the final structure. That app is hosted here. Note: I use Drizzle as the ORM here.

Prerequisites

Before you begin, make sure you have the following:
  • A GitHub account (for version control and deploying to platforms like Vercel or Cloudflare)
  • Node.js installed (with npm or pnpm)
  • VS Code (recommended IDE for TypeScript development)
  • A PlanetScale account (for MySQL, if chosen)
  • A Neon account (for PostgreSQL, if chosen)
  • An Upstash account (for Redis caching)
  • A Growthbook account (for feature flagging)
  • A Clerk account (for authentication)
  • A Vercel account (for Vercel deployment)
  • A Cloudflare account (for Cloudflare Workers deployment)
You’ll also need a custom domain if you plan to deploy your API to a production environment. In this guide, we’ll use api.example.com as a placeholder for your custom API domain.

Step 1: Local setup

First, you’ll need to initialize your Hono project and set up the development environment. This guide emphasizes a clean and organized approach from the start.
1

Initialize Hono project

Start by creating a new Hono project using its CLI tool. This typically involves setting up a source directory with an index.ts file (in my case, an app.ts file) for your main Hono application. You can specify a project name like "tasks-api" and choose Node.js for deployment, but note Hono's runtime-agnostic nature; I prefer Cloudflare Workers here because the client has a limited initial budget.
Terminal
# You can use your preferred package manager, e.g., pnpm
pnpm create hono my-hono-app
cd my-hono-app
2

Install dependencies

Ensure all necessary dependencies are installed. This setup uses tsx for serving the application locally, and you'll add others such as Prisma or Drizzle later.
Terminal
pnpm i
3

Set up ESLint

Setting up ESLint is crucial for maintaining code quality and consistency. Install the necessary ESLint dependencies and configure your editor (e.g., VS Code) to use them.
Terminal
# Example: Install ESLint and related plugins
pnpm add -D eslint @typescript-eslint/eslint-plugin @typescript-eslint/parser eslint-config-prettier eslint-plugin-prettier
After installation, you may need to reload VS Code for the ESLint settings to take effect.
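For reference, here is a minimal .eslintrc.json sketch that wires up the packages installed above; treat it as a starting point rather than a prescribed configuration, and adjust the rules to your taste.
.eslintrc.json
{
  "root": true,
  "parser": "@typescript-eslint/parser",
  "plugins": ["@typescript-eslint", "prettier"],
  "extends": [
    "eslint:recommended",
    "plugin:@typescript-eslint/recommended",
    "prettier"
  ],
  "rules": {
    "prettier/prettier": "warn"
  }
}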
4

Set up environment variables

Create a .env file in your project root to store your environment variables. This file will hold sensitive information like API keys and database connection strings. You will fill in more variables in the following steps for your database, caching, and authentication services.
Terminal
# Example for a deployed app domain (if applicable)
NEXT_PUBLIC_APP_DOMAIN=api.example.com

Step 2: Configure Hono Application Structure and Core Utilities

Organize your Hono application for clarity and maintainability, especially when integrating with OpenAPI. This guide emphasizes a scalable approach where route definitions are separated from handlers, while maintaining full type safety.
1

Define application files

It is recommended to extract the core Hono and OpenAPI related code into a separate app.ts file, keeping it distinct from Node-specific configurations (like index.ts for local server setup). This makes your Hono app runtime-agnostic and flexible for deployment to different platforms like Vercel or Cloudflare Workers.
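As a concrete sketch of that split, assuming @hono/node-server is installed and your Hono app lives in src/app.ts, the Node-specific entry point can be as small as:
// src/index.ts -- Node-specific entry point (sketch; install with `pnpm add @hono/node-server`)
import { serve } from '@hono/node-server';
import app from './app';

const port = Number(process.env.PORT ?? 8888);

// Start a local Node.js server that forwards requests to the Hono app
serve({ fetch: app.fetch, port }, (info) => {
  console.log(`API listening on http://localhost:${info.port}`);
});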
2

Implement Custom Not Found Handler

Utilize a custom notFound handler to gracefully manage 404 errors in your API. You can use a utility from a library like “Stoker” for this purpose. The source code for “Stoker” is available on GitHub if you prefer not to add another dependency.
// Example using Stoker's notFound handler (install 'stoker' if using)
import { Hono } from 'hono';
import { notFound } from 'stoker/middlewares'; // Stoker exposes its handlers from the middlewares entry point

const app = new Hono();

// Pass the handler directly; it returns a JSON 404 response including the request path
app.notFound(notFound);

export default app;
3

Implement Global Error Handler

An error handler is crucial to catch and format errors consistently across your API. This handler can format errors, including the message and stack trace (in non-production environments), and set appropriate HTTP status codes.
// Example: Global error handler
import { Hono } from 'hono';
import { HTTPException } from 'hono/http-exception';

const app = new Hono();

app.onError((err, c) => {
  if (err instanceof HTTPException) {
    return err.getResponse();
  }
  console.error(`${err}`);
  return c.json({
    message: 'Internal Server Error',
    ...(process.env.NODE_ENV !== 'production' && { stack: err.stack }),
  }, 500);
});

export default app;
4

Configure Logger (hono-pino)

Set up hono-pino for robust logging. This includes configuring environment-based logging to enable pretty-printed logs in development and JSON logs in production. You can also integrate a custom Pino logger for unique request IDs per incoming request using the Crypto API, ensuring better log traceability. You’ll also need to set up custom TypeScript types to resolve type errors related to the logger.
// In your Hono app file (e.g., src/app.ts)
import { Hono } from 'hono';
import { pinoLogger } from 'hono-pino';
import type { PinoLogger } from 'hono-pino';
import pino from 'pino';

// Define app bindings for type safety
interface AppBindings {
  Variables: {
    logger: PinoLogger;
  };
}

const app = new Hono<AppBindings>();

// Attach a request-scoped logger with a unique request ID for traceability.
// Pretty-print logs in development, plain JSON in production.
// (Option names may differ slightly between hono-pino versions; check its docs.)
app.use(pinoLogger({
  pino: pino(
    process.env.NODE_ENV !== 'production'
      ? { transport: { target: 'pino-pretty', options: { colorize: true } } }
      : {},
  ),
  http: {
    // Generate a unique ID per incoming request using the Crypto API
    reqId: () => crypto.randomUUID(),
  },
}));

// Your routes would go here, e.g.:
app.get('/', (c) => {
  // Access the request-scoped logger from the context
  c.get('logger').info('Hello from Hono!');
  return c.json({ message: 'Hello Hono API!' });
});

export default app;
5

OpenAPI and Zod Integration

Leverage the OpenAPI ecosystem for interactive documentation and client library generation. Specifically, integrate the Zod OpenAPI middleware, which lets you define schemas with Zod, create route contracts, and keep request handlers type-safe, with everything automatically converted into interactive documentation.
// Conceptual example of Zod OpenAPI integration
// You would typically define your Zod schemas and integrate them with Hono routes
// using a library like @hono/zod-openapi or similar patterns.
// This allows for type-safe validation and automatic OpenAPI spec generation.
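As a more concrete sketch, assuming the @hono/zod-openapi package and an illustrative Task schema, a route contract and its typed handler could look like this:
// Sketch using @hono/zod-openapi (install with `pnpm add @hono/zod-openapi`)
import { OpenAPIHono, createRoute, z } from '@hono/zod-openapi';

const app = new OpenAPIHono();

// The schema doubles as runtime validation and OpenAPI documentation
const TaskSchema = z.object({
  id: z.number(),
  name: z.string(),
  done: z.boolean(),
}).openapi('Task');

// Route contract: method, path, and documented responses
const listTasks = createRoute({
  method: 'get',
  path: '/tasks',
  responses: {
    200: {
      content: { 'application/json': { schema: z.array(TaskSchema) } },
      description: 'List all tasks',
    },
  },
});

// The handler is type-checked against the contract above
app.openapi(listTasks, (c) => {
  return c.json([{ id: 1, name: 'Write docs', done: false }], 200);
});

// Serve the generated OpenAPI document (can be rendered with Swagger UI, Scalar, etc.)
app.doc('/doc', {
  openapi: '3.0.0',
  info: { title: 'Tasks API', version: '1.0.0' },
});

export default app;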
6

Drizzle and Zod for Database Type Safety

This guide uses Drizzle together with drizzle-zod for robust type safety from your database schema to your API. drizzle-zod lets you define database tables with Drizzle and then generate Zod schemas directly from those definitions. These schemas can then be used in your OpenAPI definitions, giving you a single source of truth with types flowing through to your Hono handlers.
// src/db/schema.ts -- Drizzle table definition plus generated Zod schemas
import { pgTable, text, timestamp } from 'drizzle-orm/pg-core';
import { createSelectSchema, createInsertSchema } from 'drizzle-zod';

export const users = pgTable('users', {
  id: text('id').primaryKey(),
  name: text('name').notNull(),
  createdAt: timestamp('created_at').defaultNow(),
});

// Generate Zod schemas directly from the table definition.
// `selectUserSchema` can be used in your Hono route definitions for response validation,
// and `insertUserSchema` for validating request bodies.
export const selectUserSchema = createSelectSchema(users);
export const insertUserSchema = createInsertSchema(users);

Step 3: Set up Your Database (PlanetScale MySQL or Neon PostgreSQL)

You have the option to use either a MySQL database via PlanetScale or a PostgreSQL database via Neon for your application data. Both integrate well with Prisma or Drizzle.
PlanetScale (MySQL): PlanetScale recently removed its free tier, so you'll need to pay for this option. A cheaper alternative is a MySQL database on Railway ($5/month).
Local Development: For local development, it is recommended to use a local MySQL database with the PlanetScale simulator (100% free) or a local PostgreSQL instance (e.g., using Docker or a local pg installation). For Cloudflare Workers, Turso is suggested for local database simulation.
1

Choose and Create Your Database

Option A: PlanetScale (MySQL). In your PlanetScale account, create a new database. Select Prisma as your ORM when prompted, or configure it for Drizzle later.
Option B: Neon (PostgreSQL). In your Neon account, create a new project and then a new database. Neon provides a serverless PostgreSQL experience, which is highly scalable and cost-effective.
2

Set up Database Environment Variables

For PlanetScale: Create a new password for your database. Copy the DATABASE_URL from the "Add credentials to .env" section into your .env file.
For Neon: Navigate to the "Connection Details" for your database and copy the "Direct Connection String". Paste this as your DATABASE_URL in your .env file.
Terminal
DATABASE_URL="your_database_connection_string"
3

Generate ORM client and create database tables

First, install your chosen ORM (Prisma or Drizzle) in your project:
Terminal
# For Prisma
pnpm add prisma @prisma/client
pnpm prisma init

# For Drizzle (example for PostgreSQL)
pnpm add drizzle-orm pg
pnpm add -D drizzle-kit
Update your ORM schema file (e.g., prisma/schema.prisma or your Drizzle schema file) to define your data models. Then, run the following command to generate the ORM client based on your schema:
Terminal
# For Prisma
pnpm prisma generate

# For Drizzle
pnpm drizzle-kit generate:pg # or generate:mysql
Next, push your schema to create the database tables:
Terminal
# For Prisma
pnpm prisma db push

# For Drizzle (run migrations)
# You'll need to write a script to run Drizzle migrations, e.g.:
# node src/db/migrate.ts
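A minimal migration script could look like the following sketch, which assumes PostgreSQL via the pg driver and migrations generated into ./drizzle; adjust paths and driver for your setup and run it with tsx (e.g., pnpm tsx src/db/migrate.ts).
// src/db/migrate.ts -- minimal Drizzle migration runner (sketch)
import { drizzle } from 'drizzle-orm/node-postgres';
import { migrate } from 'drizzle-orm/node-postgres/migrator';
import { Pool } from 'pg';

const pool = new Pool({ connectionString: process.env.DATABASE_URL });
const db = drizzle(pool);

// Apply any pending migrations from the generated migrations folder
await migrate(db, { migrationsFolder: './drizzle' });
await pool.end();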

Step 4: Set up Upstash Redis database

Next, you’ll need to set up the Upstash Redis database. This can be used for caching, rate limiting, session management, or other real-time data needs in your Hono API. Upstash provides a serverless Redis solution.
1

Create Upstash database

In your Upstash account, create a new Redis database. For better performance and read times, it is recommended to set up a global database with several read regions if your application is geographically distributed.
2

Set up Upstash environment variables

Once your database is created, copy the UPSTASH_REDIS_REST_URL and UPSTASH_REDIS_REST_TOKEN from the REST API section into your .env file.
Terminal
UPSTASH_REDIS_REST_URL="your_upstash_redis_rest_url"
UPSTASH_REDIS_REST_TOKEN="your_upstash_redis_rest_token"
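To actually use Redis from your Hono handlers, a simple caching sketch with the @upstash/redis client (installed with pnpm add @upstash/redis) might look like this; the /cached-tasks route and the 60-second TTL are purely illustrative.
// Example: response caching with Upstash Redis (sketch)
import { Hono } from 'hono';
import { Redis } from '@upstash/redis';

const app = new Hono();

const redis = new Redis({
  url: process.env.UPSTASH_REDIS_REST_URL!,
  token: process.env.UPSTASH_REDIS_REST_TOKEN!,
});

app.get('/cached-tasks', async (c) => {
  // Try the cache first; fall back to the database on a miss
  const cached = await redis.get('tasks');
  if (cached) {
    return c.json({ source: 'cache', tasks: cached });
  }
  const tasks = [{ id: 1, name: 'Example task' }]; // placeholder for a real DB query
  await redis.set('tasks', tasks, { ex: 60 }); // cache for 60 seconds
  return c.json({ source: 'db', tasks });
});

export default app;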

Step 5: Set up Clerk for Authentication

Clerk provides a complete user management solution, including authentication, user profiles, and backend verification. It simplifies implementing robust authentication flows.
1

Create Clerk Application

Sign up for a Clerk account and create a new application. Configure your desired authentication strategies (e.g., Email, Google, GitHub, etc.) within the Clerk dashboard.
2

Configure Callback URLs

In your Clerk application settings, ensure you set the correct redirect URLs for your development and production environments. These are crucial for the OAuth flow to work correctly.
  • http://localhost:8888/ (or your local Hono API URL)
  • https://api.example.com/ (or your deployed Hono API URL)
  • You might also need to configure specific callback paths for OAuth providers if you enable them within Clerk (e.g., https://api.example.com/api/auth/callback/github).
3

Set up Clerk Environment Variables

From your Clerk dashboard, navigate to “API Keys” and copy your Publishable Key and Secret Key. Add them to your .env file. The Secret Key is essential for backend verification.
Terminal
CLERK_PUBLISHABLE_KEY="your_clerk_publishable_key"
CLERK_SECRET_KEY="your_clerk_secret_key"
4

Integrate Clerk with Hono

Install Clerk’s SDK for your backend:
Terminal
pnpm add @clerk/clerk-sdk-node
Use Clerk’s middleware or utilities in your Hono routes to protect API endpoints and retrieve authenticated user information. Clerk automatically handles JWT verification and user session management on the backend, ensuring that only authenticated requests can access protected resources.
// Example: Protecting a Hono route with Clerk
import { Hono } from 'hono';
import type { MiddlewareHandler } from 'hono';

// Type the context so handlers can read the authenticated user ID
const app = new Hono<{ Variables: { userId: string } }>();

// This is a conceptual example. You might need to adapt Clerk's Node.js SDK
// to work seamlessly with Hono's middleware system.
// A common pattern is to create a Hono middleware that wraps Clerk's verification logic:
// verify the session token from the Authorization header using your CLERK_SECRET_KEY,
// then attach the resulting user ID to the Hono context.
const clerkAuthMiddleware: MiddlewareHandler<{ Variables: { userId: string } }> = async (c, next) => {
  try {
    // For simplicity, assume a basic check for now
    const userId = c.req.header('X-Clerk-User-Id'); // Example for a pre-verified token from a frontend
    if (!userId) {
      return c.json({ error: 'Unauthorized: Missing user ID' }, 401);
    }
    c.set('userId', userId); // Set user ID in Hono context
    await next();
  } catch (error) {
    console.error('Clerk authentication error:', error);
    return c.json({ error: 'Unauthorized: Invalid token' }, 401);
  }
};

app.get('/protected', clerkAuthMiddleware, (c) => {
  const userId = c.get('userId');
  return c.json({ message: `Hello, authenticated user ${userId}!` });
});

export default app;
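Alternatively, the official @hono/clerk-auth middleware removes most of this boilerplate. A minimal sketch of the same protected route, assuming pnpm add @hono/clerk-auth @clerk/backend:
// Alternative: using the @hono/clerk-auth middleware (sketch)
import { Hono } from 'hono';
import { clerkMiddleware, getAuth } from '@hono/clerk-auth';

const app = new Hono();

// By default the middleware picks up CLERK_SECRET_KEY and CLERK_PUBLISHABLE_KEY from the environment
app.use('*', clerkMiddleware());

app.get('/protected', (c) => {
  const auth = getAuth(c);
  if (!auth?.userId) {
    return c.json({ error: 'Unauthorized' }, 401);
  }
  return c.json({ message: `Hello, authenticated user ${auth.userId}!` });
});

export default app;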

Step 6: Set up Growthbook for Feature Flagging

Growthbook allows you to implement feature flags and A/B tests in your application, enabling dynamic feature control without redeploying your code.
1

Create Growthbook Account and Project

Sign up for a Growthbook account and create a new project. This will be your central dashboard for managing features.
2

Get API Key

In your Growthbook project, navigate to "SDK Connections" or "API Keys" to obtain your Growthbook API Key (Client Key for the client-side SDK, Secret Key for the server-side SDK). For a Hono backend, you'll likely use the server-side SDK, which requires the Secret Key. Add this key to your .env file:
Terminal
GROWTHBOOK_API_KEY="your_growthbook_api_key"
3

Integrate Growthbook with Hono

Install the Growthbook SDK for Node.js:
Terminal
pnpm add @growthbook/growthbook
Initialize Growthbook in your Hono application and use it to check feature flags in your API logic. It’s good practice to load features once at application start and then use the instance in your requests.
// Example: Using Growthbook in Hono
import { Hono } from 'hono';
import { GrowthBook } from '@growthbook/growthbook';

const app = new Hono();

// Initialize Growthbook at application start
let gb: GrowthBook;

app.use(async (c, next) => {
  if (!gb) {
    gb = new GrowthBook({
      apiHost: 'https://cdn.growthbook.io', // Or your self-hosted instance
      clientKey: process.env.GROWTHBOOK_API_KEY, // Key from your GrowthBook SDK connection
      enableDevMode: process.env.NODE_ENV !== 'production',
    });
    await gb.loadFeatures(); // Load features from Growthbook API
  }
  // You can pass the GrowthBook instance down through the context for easy access in handlers
  c.set('growthbook', gb);
  await next();
});

app.get('/feature-test', (c) => {
  const gbInstance = c.get('growthbook');
  if (gbInstance.isOn('new-api-endpoint')) {
    return c.json({ message: 'New API endpoint enabled by feature flag!' });
  } else {
    return c.json({ message: 'Old API endpoint is active (feature flag disabled).' });
  }
});

export default app;

Step 7: Deploy to Vercel

Once you’ve set up your Hono application and all integrated services, you can now deploy your API to Vercel. Vercel is well-suited for Node.js-based applications.
1

Deploy code to GitHub

If you haven’t already, push your Hono repository to GitHub by running the following commands:
Terminal
git add .
git commit -m "Initial Hono API commit with services setup"
git push origin main
2

Create a new Vercel project

In your Vercel account, create a new project. Then, select your GitHub repository and click Import. Make sure that your Framework Preset is set appropriately (e.g., Other if Hono is not directly listed), and configure build commands and output directory as needed for your Hono setup. Note that some configuration might be needed for path aliases on Vercel without bundling, so you may need to adjust your vercel.json or build process to ensure your Hono app builds correctly for the Vercel Edge Runtime or Node.js Runtime. In the Environment Variables section, add all of the environment variables from your .env file. Click on Deploy to deploy your project.
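One common pattern, shown here as a sketch that assumes your runtime-agnostic app lives in src/app.ts, is to expose it through Hono's Vercel adapter from an api/index.ts file:
// api/index.ts -- Vercel entry point sketch using Hono's Vercel adapter
import { handle } from 'hono/vercel';
import app from '../src/app';

// Opt into the Edge Runtime; remove this export to use the Node.js runtime instead
export const config = { runtime: 'edge' };

export default handle(app);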
3

Add required environment variables and domains

Once the project deploys, retrieve your Vercel project ID and add it as the PROJECT_ID_VERCEL environment variable, both in your .env file and in your newly created Vercel project's settings (under Settings > Environment Variables). Add your API domain (e.g., api.example.com) as a domain in your Vercel project's settings (under Settings > Domains); Vercel's documentation explains how to set up a custom domain.
4

Redeploy your Vercel project

Go back to the Deployments page and redeploy your project. Once the deployment is complete, you should be able to visit your API domain (e.g., https://api.example.com) and interact with your deployed Hono API, complete with database integration, caching, feature flags, and robust authentication.

Step 8: Deploy to Cloudflare Workers (Alternative to Vercel)

Hono was initially built with Cloudflare Workers in mind, making it an excellent choice for highly performant and scalable serverless functions. The setup differs slightly, particularly in how environment variables are handled.
1

Clone Repository and Install Wrangler

Start by cloning your Hono API repository. You’ll need to install Wrangler, Cloudflare’s CLI tool, as a development dependency for local testing and deployment.
Terminal
git clone https://github.com/your-repo/my-hono-app.git cloudflare_example
cd cloudflare_example
pnpm add -D wrangler @cloudflare/workers-types
2

Configure Wrangler (`wrangler.toml`)

Create a wrangler.toml file in your project root for Cloudflare Workers configuration. Set the main entry point to your Hono application file (e.g., src/app.ts), which exports the Hono app itself, not a Node.js server.
wrangler.toml
name = "my-hono-api"
main = "src/app.ts" # Your Hono app entry point
compatibility_date = "2025-06-29" # Use the current or a recent date
workers_dev = true

# Example for environment variables (secrets are added via CLI/dashboard)
# [vars]
# MY_VARIABLE = "value"
Note: The compatibility_date should be set to the current date or a recent date to ensure you’re using the latest Workers features.
3

Update Package.json Scripts

Modify your package.json scripts to include dev and deploy commands suitable for Cloudflare Workers using Wrangler. The build and start scripts are only needed if you also want to run a local Node.js server.
package.json
{
  "scripts": {
    "dev": "wrangler dev",
    "deploy": "wrangler deploy",
    "build": "tsc",
    "start": "node dist/index.js"
  }
}
4

Handle Environment Variables for Cloudflare Workers

A key challenge with Cloudflare Workers is that environment variables and secrets are only accessible within the fetch event handler, not through process.env. You’ll need to refactor your environment variable handling.
  1. Define an Environment type: Rename your Env type to Environment (or similar) to align with Hono’s built-in types for c.env.
  2. Create a parseEnv function: Export a parseEnv function that accepts an object (your c.env from the request context), validates it with Zod (or similar), and returns the validated environment variables. This function should throw an error if validation fails.
  3. Add a middleware for validation: Implement a Hono middleware that runs first, validates c.env using your parseEnv function, and ensures environment variables are correctly typed and accessible within request handlers.
  4. Update type bindings: Ensure c.env is correctly typed as Environment in your app.d.ts or similar type definition file.
  5. runtime-env.ts for local tools: Create a runtime-env.ts file to provide a way to access process.env for local tools (like Drizzle Kit) that run outside the Cloudflare Worker context. This file can export a default parsed process.env.
// src/env.ts -- schema and parser for environment variables
import { z } from 'zod';

// Define your expected environment variables schema
export const EnvSchema = z.object({
  DATABASE_URL: z.string().url(),
  CLERK_SECRET_KEY: z.string(),
  // ... other env variables
});

export type Environment = z.infer<typeof EnvSchema>;

// Throws a descriptive ZodError if anything is missing or malformed
export const parseEnv = (env: unknown): Environment => EnvSchema.parse(env);

// src/app.ts (Hono app)
import { Hono } from 'hono';
import { parseEnv, type Environment } from './env';

// Type c.env with the validated Environment shape
const app = new Hono<{ Bindings: Environment }>();

// Middleware to validate c.env on every request
// (on Cloudflare Workers, c.env is only available inside the request context)
app.use(async (c, next) => {
  try {
    parseEnv(c.env);
    await next();
  } catch (error) {
    console.error('Environment variable validation error:', error);
    return c.json({ error: 'Server configuration error' }, 500);
  }
});

export default app;
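For item 5 above, a small runtime-env.ts can reuse the same parser for tools that run outside the Worker; this sketch assumes dotenv is installed and the parseEnv helper shown above.
// src/runtime-env.ts -- for tools like drizzle-kit that read process.env outside the Worker
import { config } from 'dotenv';
import { parseEnv } from './env';

config(); // Load .env into process.env

// Export a validated copy of process.env for local tooling
export default parseEnv(process.env);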
5

Refactor Database Connection for Workers

Since database clients and connections need to be created on every incoming request in a serverless environment like Cloudflare Workers, refactor your database setup.
  1. Export a createDB function from your database module (e.g., src/db/index.ts) that takes the Environment object (or your parsed environment variables) and returns the database client and instances.
  2. Call this createDB function within each request handler where database access is needed, ensuring the connection uses the environment variables available in the request context.
// src/db/index.ts (conceptual)
import { drizzle } from 'drizzle-orm/libsql'; // For Turso/SQLite
import { createClient } from '@libsql/client'; // For Turso

export const createDB = (env: { DATABASE_URL: string; DATABASE_AUTH_TOKEN?: string }) => {
  const client = createClient({
    url: env.DATABASE_URL,
    authToken: env.DATABASE_AUTH_TOKEN,
  });
  const db = drizzle(client);
  return { client, db };
};

// src/routes/tasks/handlers.ts (conceptual Hono handler)
import { Hono } from 'hono';
import { createDB } from '../../db'; // Adjust path

// Type c.env so the handler knows which bindings to expect
const app = new Hono<{ Bindings: { DATABASE_URL: string; DATABASE_AUTH_TOKEN?: string } }>();

app.get('/tasks', async (c) => {
  const { db } = createDB(c.env); // Create a fresh client per request from c.env
  // const tasks = await db.select().from(schema.tasks).execute();
  return c.json({ message: 'Tasks retrieved (using Cloudflare DB setup)' });
});
6

Local Development with Wrangler and Turso

To simulate Cloudflare’s environment locally and have a local database, it is suggested to use Wrangler with Turso.
  1. .dev.vars file: Create a .dev.vars file (by copying your .env) to store local environment variables for Wrangler. Add this file to your .gitignore.
  2. Install Turso CLI: pnpm add -D turso
  3. dev:db script: Add a dev:db script to package.json to run turso dev for a local database instance, using a file like dev.db.
  4. Push schema to local DB: Push your Drizzle Kit schema to this local dev.db.
  5. Update DATABASE_URL: Update the DATABASE_URL in .dev.vars to point to the local Turso HTTP API (e.g., http://127.0.0.1:8080).
  6. Gitignore local DB files: Add dev.db.* to .gitignore.
Terminal
# Example .dev.vars
DATABASE_URL="http://127.0.0.1:8080"
DATABASE_AUTH_TOKEN="" # Turso local dev doesn't require auth token usually
CLERK_SECRET_KEY="sk_test_..."
# ... other local env variables

# package.json scripts
{
  "scripts": {
    "dev": "wrangler dev",
    "dev:db": "turso dev --db-file dev.db",
    "deploy": "wrangler deploy",
    "db:push:local": "drizzle-kit push:sqlite --config=drizzle.config.ts --url=file:./dev.db"
  }
}
7

Deploy to Cloudflare

Once configured, deploy your application using the Wrangler CLI.
  1. Authenticate: Ensure you are logged into Cloudflare via wrangler login.
  2. Add Secrets: Manually add your sensitive environment variables (like CLERK_SECRET_KEY, DATABASE_URL, DATABASE_AUTH_TOKEN for remote DBs) as secrets in the Cloudflare dashboard for your Worker.
  3. Deploy: Run pnpm run deploy.
Terminal
pnpm run deploy
Note: If you make changes to secrets in the Cloudflare dashboard, deploying from the CLI might override them unless explicitly configured not to. Always verify your deployed application by accessing its URL and testing endpoints.
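Secrets can also be added from the CLI instead of the dashboard, for example:
Terminal
# You'll be prompted for each secret's value
wrangler secret put CLERK_SECRET_KEY
wrangler secret put DATABASE_URL
wrangler secret put DATABASE_AUTH_TOKEN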

Caveats

This guide is meant to be a comprehensive starting point for setting up your Hono API. It currently relies on the following services:
  • Database: PlanetScale (MySQL) or Neon (PostgreSQL) with Prisma or Drizzle as the ORM. For Cloudflare Workers local development, Turso is recommended.
  • Caching/Real-time data: Upstash (Redis)
  • Authentication: Clerk for user management and backend verification
  • Feature Flagging: Growthbook
  • Hosting: Vercel or Cloudflare Workers
  • Developer Experience: “Stoker” for boilerplate reduction, hono-pino for robust logging, and integration with Zod and OpenAPI for type safety and documentation.
The Hono application’s core definition is runtime-agnostic, allowing for flexible deployment to various environments. However, specific platform requirements (like Cloudflare’s environment variable handling) necessitate careful configuration.