 
 Migrating a Node.js App to Cloudflare Workers from Heroku
Redesigning a Node.js/MongoDB app for Cloudflare Workers and the things I wasn’t aware of at the beginning
  Patrick Chiu • 2022/10/14
Note: Cloudflare Workers is definitely not a direct alternative to Heroku. If we want minimal adjustments to a Node.js app, Google Cloud App Engine might be a better choice.
Introduction
For a very long time, Heroku has been my default choice for hosting my side projects due to its convenience and ease of use. Sadly, Heroku will discontinue its free product plans soon. This post journals my experience of migrating one of my side projects, Medium Rare, to Cloudflare Workers 👷🏻.
Medium Rare is a web app that indexes and distributes Chinese-language articles on the Medium platform. My motivation comes from the fact that Medium primarily supports English articles, while the distribution of non-English articles is, well, little to none.
The Medium Rare backend is pretty simple: it has only 3 endpoints and connects to a MongoDB database.
1. GET /articles
2. GET /writers
3. POST /articles/read
Let's use GET /articles as an example.
Redesigning the App
There are 2 areas that need to be adjusted to fit our app to Cloudflare Workers: routing and database connection.
1. Routing
const express = require('express');
const app = express();
const port = 3000;
app.get('/articles', async (req, res) => {
  // 1. Some magical step to formulate the query
  const query = formulateQuery(req);
  // 2. Query the database
  const articles = await db.collection('articles').find(query).toArray();
  // 3. Return the articles
  return res.json({ articles });
});
app.listen(port, () => {
  console.log(`MR running on port ${port}`);
});
Above is a demo GET /articles API written in Express.js. Here is what it looks like if we would like to achieve something similar in CF Workers:
export default {
  async fetch(request, env, context) {
    const { pathname } = new URL(request.url);
    // 1. Match the route by ourselves
    if (request.method === 'GET' && pathname.startsWith('/articles')) {
      // 2. Some magical steps
      const articles = await someMagicalSteps();
      // 3. Return the articles
      const json = JSON.stringify(articles, null, 2);
      return new Response(json, {
        headers: { 'content-type': 'application/json;charset=UTF-8' },
      });
    }
    // Return a 404 for unmatched routes; otherwise the handler returns
    // undefined and the Worker errors out
    return new Response('Not Found', { status: 404 });
  },
};
It doesn't seem very developer-friendly to use the native CF Workers fetch event runtime API. Luckily, there are a few routing libraries that support CF Workers, e.g. itty-router and Hono. I find myself leaning towards Hono, and here is what it looks like:
import { Hono } from 'hono';
const app = new Hono();
app.get('/articles', async (context) => {
  // 1. Some magical step to formulate the query
  const query = formulateQuery(context.req);
  // 2. Query the database, which will be discussed in Section 2
  const articles = await getArticlesFromDb(query);
  // 3. Return the articles
  return context.json({ articles });
});
export default app;
Noice! It feels like home!
2. Database connection
When we are using CF Workers, we cannot connect directly to our MongoDB with the driver. Instead, CF Workers' strategy is to support databases that can connect over HTTP, and MongoDB now offers the Data API, which lets you read and write data with standard HTTPS requests!
By the way, I created mongo-http.js - a thin wrapper around the MongoDB Atlas Data API, which provides a similar API to the MongoDB Node.js driver.
Back to our original database query, it could look something like this:
const articles = await db
  .collection('articles')
  .find({ tags: { $in: ['cloudflare', 'heroku', 'nodejs'] } })
  .toArray();
When we use the MongoDB Data API, it would be something like this:
const payload = {
  collection: 'articles',
  database: 'medium',
  dataSource: 'Cluster0',
  filter: { tags: { $in: ['cloudflare', 'heroku', 'nodejs'] } },
};
const response = await fetch(env.MONGODB_URL + '/action/find', {
  method: 'POST',
  headers: {
    'Content-Type': 'application/json',
    'api-key': env.MONGODB_API_KEY,
  },
  body: JSON.stringify(payload),
});
const { documents: articles } = await response.json();
Things I wasn't aware of
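One generic JavaScript pitfall worth flagging in the snippet above: response.json() returns a Promise, so it must be awaited before .documents can be read. A minimal, self-contained illustration with a faked response object (the fake and its data are for illustration only):

```javascript
// Faked response object standing in for the Data API response (illustration only)
const fakeResponse = {
  json: async () => ({ documents: ['article-1', 'article-2'] }),
};

async function demo() {
  // Wrong: reading .documents off the still-pending Promise yields undefined
  const wrong = fakeResponse.json().documents;

  // Right: await the Promise first, then destructure the documents array
  const { documents: articles } = await fakeResponse.json();

  return { wrong, articles };
}
```

Here demo() resolves to { wrong: undefined, articles: ['article-1', 'article-2'] }, which is why the query code awaits response.json() before touching .documents.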
- Environment variables are passed along the context/request, instead of read from process.env — In the Database connection section above, we use MONGODB_URL and MONGODB_API_KEY to make an HTTPS request to the MongoDB Data API. If the native fetch event runtime API is used, they would be in the parameters:
export default {
  fetch(request, env, context) {
    //
  },
};
In Hono, env is inside the context parameter:
app.get('/articles', (context) => {
  const { env } = context;
});
- Updates to environment variables in wrangler.toml need a wrangler publish to take effect — I originally thought wrangler dev was grabbing environment variables from wrangler.toml. It looks like it is grabbing them from the actual CF Workers in the cloud instead. This also leads to #3.
- wrangler dev also uses your requests quota — Turns out that if we want fully local development, we should be using Miniflare. This also leads to #4 (the last one).
- If you wrangler publish more than 1 environment, each API call to the “local” server uses 1 request from every environment — For example, if you have wrangler published 3 environments api-dev, api-staging and api-production, and you spin up a “local” server with wrangler dev, then each request you fire to the “local” server will use up 3 requests of quota.
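For reference, plain-text environment variables like MONGODB_URL live under a [vars] table in wrangler.toml, with per-environment overrides under [env.*.vars]. A minimal sketch — every name and URL below is illustrative, not from the actual project:

```toml
name = "medium-rare-api"
main = "src/index.js"              # entry point (illustrative)
compatibility_date = "2022-10-14"

# Plain-text variables — remember: edits here only take effect after `wrangler publish`
[vars]
MONGODB_URL = "https://data.mongodb-api.com/..."   # Data API base URL (placeholder)

# Per-environment override, e.g. for an api-staging environment
[env.staging.vars]
MONGODB_URL = "https://data.mongodb-api.com/..."
```

Secrets such as MONGODB_API_KEY are better set with `wrangler secret put MONGODB_API_KEY`, so they never live in the repository.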
Epilogue
There are a few things I haven't covered - debugging and logging. In short, the debugging experience is great, since CF Workers' DevTools uses the browser's DevTools. For logging, CF Workers has a log-streaming dashboard for real-time logs, while we need to bring our own logging service to persist logs.
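To make the "bring our own logging service" point concrete, here is one possible sketch: a small helper builds a structured log entry, and the Worker ships it to an external collector via context.waitUntil(), so the logging request can finish after the response is returned. The collector URL and log fields are made up for illustration:

```javascript
// Builds a minimal structured log entry for a request URL (pure, easily testable)
function buildLogEntry(url, status) {
  const { pathname } = new URL(url);
  return { ts: new Date().toISOString(), path: pathname, status };
}

// Worker sketch: fire-and-forget the log request with context.waitUntil()
const worker = {
  async fetch(request, env, context) {
    const response = new Response('ok');
    const entry = buildLogEntry(request.url, response.status);
    context.waitUntil(
      fetch('https://logs.example.com/ingest', { // hypothetical log collector
        method: 'POST',
        headers: { 'content-type': 'application/json' },
        body: JSON.stringify(entry),
      }).catch(() => {}) // never let logging failures break the request
    );
    return response;
  },
};
// (in a real Worker module: export default worker)
```

Since waitUntil() extends the Worker's lifetime past the response, the client never waits on the logging round trip.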
Overall, it is a great developer experience developing on CF Workers. I'm looking forward to exploring more as I migrate more side projects to CF Workers!
 
 