Tom MacWright

tom@macwright.com

Charitable trusts #

This is all a somewhat recent realization for me, so there may be angles I don’t see yet.

So I was reading Bailout Nation, Barry Ritholtz’s book about the 2008 financial crisis, and one of its points was that charitable trusts were one of the reasons why CDOs and other dangerous and exotic financial products were adopted with such vigor: the trusts were legally required to pay out a percentage of assets per year, like 5%, and previously they were able to do so using Treasury Bills or other forms of safe debt because interest rates were higher. But interest rates were dropped to encourage growth, which made it harder to find reliable medium-return instruments, which put the attention on things like CDOs that produced a similar ‘fixed’ return, but were backed by much scarier things.

And then I was sitting in a park with a friend talking about how this was pretty crazy, and that these trusts are required to pay at least that percentage. He said (he knows a lot more about this than I do) that the 5% number is an IRS stipulation, but it is often also a cap: so the trust can’t pay more than that. Which - he’s not very online, and this is paraphrasing - is an incredible example of control from the grave.

All in all, why do charitable trusts exist? Compare the two outcomes:

  • A billionaire donating all of their money to a charity at their death, or even before
  • Or, a billionaire putting their money in a charitable trust that pays 5% a year

The outcomes are simply worse when you use a trust, right? Using trusts gives charities less money, because they have to accept a trickle of yearly donations instead of just receiving the money. In exchange for withholding the full amount from charities, heirs receive the power to direct the trust to one cause or another, and the prestige of the foundation’s name and continuing wealth. This isn’t better for charities, right?

It seems thoroughly wrong that our legal system allows people to exert control long after they’re dead, and to specifically exert the control to withhold money from active uses. Charitable trusts are better understood as ways to mete out little bits of charity in a way designed to benefit wealthy families.

As far as I remember, Against Charity - a whole anti-charity book that I read - didn’t lay out this argument, even though it’s definitely something people are thinking about.

A day using zed #

I am a pretty faithful neovim user. I’ve been using vim/neovim since around 2011, and tolerate all of its quirks and bugs because the core idea of modal editing is so magical, and it has staying power. Vim is 32 years old and will probably be around in 30 years: compare that to all of the editors that were in vogue before VS Code came around - TextMate, Atom, Brackets, Sublime Text. They’ve all faded in popularity; most of them are still around and used by some people, though Atom is officially done.

But using Vim to develop TypeScript is painful at times: I’m relying on a bunch of Lua plugins that individuals have developed, because Microsoft somehow has developed LSP, the standard for editors communicating with language tools, and also developed TypeScript, a language tool, and somehow has still not made the “TypeScript server” speak LSP. So these plugins have to work around little inconsistencies in how TypeScript works, and instead of using Neovim’s built-in LSP support, they ship TypeScript-specific tooling. Because Microsoft can’t get their wildly successful product to speak the wildly successful specification that they also created.

So, I dabble with Zed, which is a new editor from the same team as Atom, but this time written in Rust and run as a standalone startup instead of a unit inside of GitHub.

Now Zed is pretty good. The latency of VS Code, Atom, and other web-technology-based editors is very annoying to me. Zed is extremely fast and its TypeScript integration is pretty good. Plus, it’s now open source. Still, I think its chances of being around in 30 years are probably lower than VS Code’s: Zed is built by a pre-revenue startup, and releasing complex code as open source is absolutely no guarantee of longevity - there is no skilled community waiting to swoop in and maintain the work of a defunct startup, if Zed were ever to go defunct.

So, on days when neovim is annoying me, I use Zed. Today I’m using it and I’ll take some notes.

VIM support is overall pretty good

Zed’s vim mode is not half-assed. It’s pretty darn good. It’s not actually vim - it’s an emulation layer, and not a complete one, but enough that I’m pretty comfortable with navigation and editing right off the bat.

That said, it needs a little extra to be usable for me. I used the little snippet of extra keybindings in their docs to add a way to switch between panes in a more vim-like fashion:

[
  {
    "context": "Dock",
    "bindings": {
      "ctrl-w h": ["workspace::ActivatePaneInDirection", "Left"],
      "ctrl-w l": ["workspace::ActivatePaneInDirection", "Right"],
      "ctrl-w k": ["workspace::ActivatePaneInDirection", "Up"],
      "ctrl-w j": ["workspace::ActivatePaneInDirection", "Down"]
    }
  }
]

Still looking for a way to bind ctrl-j and ctrl-k to “previous tab” and “next tab”. The docs for keybindings are okay, but it’s hard to tell what’s possible: I don’t get autocomplete hints when I type workspace:: into a new entry in the keybindings file.

Shortcut hints in the UI are the default shortcuts, not the vim versions

  • Another mild annoyance with using Zed as a vim-replacement is that if I right-click something like a variable, the UI shows potential actions and the shortcuts to trigger those actions, but the shortcuts are the defaults, not the vim bindings. I don’t want to hit F2 to rename a symbol, I want to type cd.
  • Next thing I miss is my ripgrep-based search in neovim which I used constantly (through telescope). Unless you have a perfect file-per-React-component system or have a great memory for which files contain what, it’s pretty important to have great search. Zed’s search is fine, but it’s a traditional search mode, not geared for quick navigation.
  • Creating new files: in vim I almost always do :tabnew %, hit enter - which expands % into the current filename - and then edit that filename to produce a new name. This workflow doesn’t work in Zed: % doesn’t expand to the current filename. The only way I can find to save a new file is through the file dialog, which is super tedious. Without % expansion in general, I’m left with less efficient ways to do things: to delete the currently-open file, I’d usually do ! rm %, but I can’t.

Overall, Zed’s TypeScript integration is great, and it’s a very fast, pretty well-polished experience. When I tested it a few months ago, I was encountering pretty frequent crashes, but I didn’t encounter any crashes or bugs today. Eventually I might switch, but this trial made me realize how custom-fit my neovim setup is and how an editor that makes me reach for a mouse occasionally is a non-starter. I’ll need to find Zed-equivalent replacements for some of my habits.

Takeaway from using CO₂ monitors: run the exhaust fan #

For the last few years, I’ve had Aranet 4 and AirGradient sensors in my apartment. They’re fairly expensive gadgets that I have no regrets purchasing – I love a little more awareness of things like temperature, humidity, and air quality, it’s ‘grounding’ in a cyberpunk way. But most people shouldn’t purchase them: the insights are not worth that much.

So here’s the main insight for free: use your stove’s exhaust fan. Gas stoves throw a ton of carbon dioxide into your living space and destroy the air quality.

This is assuming you’re using gas: get an electric range if you can, but if you live in a luxury New York apartment(1) like me, you don’t have a choice.

I used to run the exhaust fan only when there was smoke or smell from cooking. This was a foolish decision in hindsight: run the exhaust fan whenever you’re cooking anything. Your lungs and brain will thank you.

  1. luxury here means basic, run-down apartments with no amenities that are now expensive because of the housing shortage caused by rampant obstructionism of housing in dense cities

Hawbuck wallets #

There are a lot of companies pitching new kinds of wallets, with lots of ads - Ridge is one of the most famous. An option that never seems to come up in internet listicles but I’ve sworn by for years is the Hawbuck wallet.

My personal preferences for this kind of thing are:

  1. Lightweight
  2. Durable
  3. Ideally, vegan

Hawbuck checks all the boxes. I used mighty wallets before this and got a year or two out of them, but Hawbuck wallets wear much, much slower than that. Dyneema is a pretty magical material.

I’m happy it’s also not leather. I still buy and use leather things, despite eating vegan, simply because the vegan alternatives for things like shoes and belts tend to be harder to get and they don’t last as long.

(this is not “sponsored” or anything)

Notes on using Linear #

We’ve been using Linear for a month or two at Val Town, and I think it has ‘stuck’ and we’ll keep using it. Here are some notes about it:

  • The keyboard shortcuts are as good as people say they are: you can do things like hover your mouse over a row in a list, hit a keyboard shortcut, and it’ll apply to the hovered target item. This is really impressive stuff.
  • I do quite like the desktop app - it feels pretty polished, for an Electron (or Tauri, not sure) application. I’ve been using Vimium pretty heavily again - a Chrome extension that adds VIM-style keybindings to Chrome. It is a godsend for my RSI, which is triggered when I use a mouse, but Vimium totally botches most websites’ built-in keybindings. Keybindings are a really hard problem in general, but using a standalone wrapped-web app seems like a good way to make them more reliable, by insulating them from whatever funky Chrome extensions you’re using at the moment.
  • I dearly miss the ability to permalink a section of code, paste it into a comment, and have GitHub turn that into an inline code snippet. It was so nice for discussing things like regressions, because you could point right to the ground truth.
  • I have really mixed feelings about Linear’s editor, which has some Markdown abilities - typing _ around a string will make it italicized - but not others - Markdown links don’t become links, unlike Notion’s Markdown-ish editor. I get that it’s built to be friendly for both managers and engineers, thus creating an interface between the people who know Markdown and the ones who don’t. But it’s an awkward middle ground that forces me to use a mouse more than I’d like.
  • Man, there are so many ways to organize stuff. I have in the past had mixed experiences with Linear for just this reason - it allows people to create labyrinthine systems of organization, and then try to apply their tags and milestones and projects to the real world and say “make it happen,” and, alas, the map doesn’t turn into the terrain. But on the other hand, the “cycles” system, which is kind of like a time-constrained milestone that restarts every week or two - I like that. It injected some good energy into the organization.
  • It really is extremely pretty - it’s up there with Notion in terms of applications that just look expensive, like a top-of-the-line Volvo (this is not a dig, I think Volvos, and Polestars, look great).
  • The realtime sync is usually great, but sometimes two people are editing the same ticket at the same time, and it’s just weird. Realtime sync for the “state of the world” seems good; realtime sync that feels like stepping on everyone’s toes or peeping over someone’s shoulder, not so great.
  • It’s pretty similar on mobile to GitHub’s experience, for now. They’re teasing a native app, which I hope is great.

React is old #

My last big project at Mapbox was working on Mapbox Studio. We launched it in 2015.

For the web stack, we considered a few other options - we had used d3 to build iD, which worked out great, but we were practically the only people on the internet using d3 to build HTML UIs - I wrote about this in “D3 for HTML” in 2013. The predecessor to Mapbox Studio, TileMill, was written with Backbone, which was cool but is almost never used today. So, anyway, we decided on React when we started it, which was around 2014.

So it’s been a decade. Mapbox Studio is still around, still on React, albeit many refactors later. If I were to build something new like it, I’d be tempted to use Svelte or Solid, but React would definitely be in the running.

Which is to say: wow, this is not churn. What is the opposite of churn? Starting a codebase and still having the same tech be dominant ten years later? Stability? For all of the words spilled about trend-chasing, and the people talking about how one of these days React will be out of style, well… it’s been in style for longer than a two-term president.

When I make tech decisions, the rubric is usually that if it lasts three years, then it’s a roaring success. I know, I’d love for the world to be different, but this is what the industry is and a lot of decisions don’t last a year. A decision that seems reasonable ten years later? Pretty good.

Anyway, maybe next year or the year after there’ll be a real React successor. I welcome it. React has had a good, very long, run.

Hooking up search results from Astro Starlight in other sites #

At Val Town, we recently introduced a command-k menu, that “omni” menu that sites have. It’s pretty neat. One thing that I thought would be cool to include in it would be search results from our documentation site, which is authored using Astro Starlight. Our main application is React, so how do we wire these things together?

It’s totally undocumented and this is basically a big hack but it works great:

Starlight uses Pagefind for its built-in search engine, which is a separate, very impressive, mildly documented open source project. So we load the pagefind.js file that Starlight builds, using an ES import across domains, and then just use the Pagefind API. That way we’re loading both the search algorithms and the content straight from the documentation website.

Here’s an illustrative component, lightly adapted from our codebase. This assumes that you’ve got your search query passed to the component as search.

import { Command } from "cmdk";
import { useDebounce } from "app/hooks/useDebounce";
import { useQuery } from "@tanstack/react-query";
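
// Assumed minimal shape of a Pagefind result, just so this compiles:
// each result exposes a data() method that resolves to the fields used below.
type Result = {
  data: () => Promise<{ url: string; excerpt: string }>;
};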

function DocsSearch({ search }: { search: string }) {
  const debouncedSearch = useDebounce(search, 100);
  
  // Use react-query to dynamically and lazily load the module
  // from the different host
  const pf = useQuery(
    ["page-search-module"],
    // @ts-expect-error
    () => import("https://docs.val.town/pagefind/pagefind.js")
  );

  // Use react-query again to actually run a search query
  const results = useQuery(
    ["page-search", debouncedSearch],
    async () => {
      const { results }: { results: Result[] } =
        await pf.data.search(debouncedSearch);
      return Promise.all(
        results.slice(0, 5).map((r) => {
          return r.data();
        })
      );
    },
    {
      enabled: !!(pf.isSuccess && pf.data),
    }
  );

  if (!pf.isSuccess || !results.isSuccess || !results.data.length) {
    return null;
  }

  return results.data.map((res) => {
    return (
      <Command.Item
        forceMount
        key={res.url}
        value={res.url}
        onSelect={() => {
          window.open(res.url);
        }}
      >
        <a
          href={res.url}
          onClick={(e) => {
            e.preventDefault();
          }}
        >
          <div
            dangerouslySetInnerHTML={{ __html: res.excerpt }}
          />
        </a>
      </Command.Item>
    );
  });
}

Pretty neat, right? This isn’t documented anywhere for Astro Starlight, because it’s definitely relying on some implementation details. But the same technique would presumably work just as well in any other web framework, not just React.

The S&P 500 is largely a historical artifact #

I see the S&P 500 referenced pretty frequently as a vanilla index for people investing. This isn’t totally wrong, which is why this post is short. But, if you have the goal of just “investing in the market,” there’s a better option for doing that: a total market index. For Vanguard, instead of VOO, it’d be VTI. For Schwab, it’s SCHB. Virtually every provider has an option. For background, here’s Jack Bogle discussing the topic.

The S&P 500 is not a quantitative index of the top 500 companies: it has both selection criteria and a committee that takes a role in selection. In contrast, total market indices are typically fully passive and quantitative, and they own more than 500 companies.

So if you want to “own the market,” you can just do that. Not investment advice.

Web pages and video games #

An evergreen topic is something like “why are websites so big and slow and hard and video games are so amazing and fast?” I’ve thought about it more than I’d like. Anyway, here are some reasons:

  • Web pages are just-in-time delivered, with no installation required. Modern video games typically require a long install process, downloading tens of gigabytes onto the console. Even after that, they take minutes to boot up: when I play Cyberpunk on my Xbox Series S, the loading screen takes at least a minute. It’s fast after that, but it’s fast because of that loading phase.
  • Video game development cycles are long and extraordinarily expensive. A recent failed game that didn’t make a serious media stir cost over 140 million dollars to make. There is no startup that will pour over $100 million into a website before even launching it. And AAA video games, which are often the ones that people have in mind, take years to develop.
  • Video games are generally single-tasked: you don’t have 10 of them open at a time, ready to switch tabs at any moment. They’re usually full-screen too, so they don’t even need to be composited with other graphics in a window manager, you can just shoot pixels straight to the screen.
  • The web is an aggressively heterogeneous platform, more so than nearly anything else. Webpages by default support any screen size, input method, light and dark mode, and pixel density. Large websites are expected to support multiple languages, and to scale down to cheap feature-phones. And websites, when they do ship native-like code, need to do so through WASM rather than some platform-specific binary. This is a lot of the power, and the struggle, of the web: you are writing code for an unimaginably wide range of devices.

About Placemark.io #

Someone asked over email about why I stopped building Placemark as a SaaS and made it an open source project. I have no qualms with sharing the answers publicly, so here they are:

I stopped Placemark because it didn’t generate enough revenue and I lost faith that there was a product that could be implemented simply enough, for enough people, who wanted to pay enough money, for it to be worthwhile. There are many mapping applications out there and I think that collectively there are some big problems in that field:

  1. The high end is captured by Esri, whose customers dislike the tool but tolerate it and are locked into it, and Esri is actually very good at what it does.
  2. The low end is captured by free tools that are subsidized by big companies like Google, or run by open source communities like QGIS, which causes users to generally expect similar software to be super cheap. VC-funded startups are able to underprice their software for a few years and spend tens of millions building it. Placemark was fully bootstrapped, self-funded, and built just by me.
  3. There are vast differences in user expectations that make it very hard to make a software product in the middle, between the complexity of QGIS and the simplicity of Google Maps - people want some combination of analytics, editing, social features, etc that are all hard to combine into anything simple.
  4. It is very hard to build a general-purpose piece of software like Placemark. If I were to do it again, I’d do something in a niche, targeting one specific kind of customer in one specific industry.

I do want to emphasize that I knew most of this stuff going into it, and it’s, like, good that geo has a lot of open source projects, and I don’t have any ill feelings toward really any of the players in the field. It’s a hard field!

As I’ve said a bunch of times, the biggest problem with competition in the world of geospatial companies is that there aren’t many big winners. We would all have a way different perspective on geospatial startups if even one of them had a successful IPO in the last decade or two, or even if a geospatial startup entered the mainstream in the same way as a startup like Notion or Figma did. Esri being a private company is definitely part of this - they’re enormous, but nobody outside of the industry talks about them because there’s no stock and little transparency into their business.

Also, frankly, it was a nerve-wracking experience building a bootstrapped startup, in an expensive city, with no real in-person community of people doing similar kinds of stuff. The emotional ups and downs were really, really hard: every time that someone signed up and cancelled, or found a bug, or the servers went down as I was on vacation and I had to plug back in.

You have to be really, really motivated to run a startup, and you have to have a very specific attitude. I’ve learned a lot about that attitude - about trying to find positivity and resilience - after ending Placemark. It comes naturally to some people, who are just inherently optimistic, but not all of us are like that.

Remix notes #

Val Town switched to Remix as a web framework a little over a year ago. Here are some reflections:

  • The Remix versioning scheme is a joy. They gradually roll out features under feature flags, so you have lots of time to upgrade.
  • Compared to what seems like chaos over in Next.js-land, Remix hasn’t had many big breaking changes or controversies. It’ll eventually adopt RSC, but I am glad that it is not the ‘first mover’ in that regard.
  • We’ve hit a few bugs - in types, utf-8 support, and docs - but Remix is rarely the culprit when there’s an outage or some quirk in the application.
  • We haven’t switched to Vite but are pretty excited to. Initially the fact that Remix is obviously a “collection of parts,” like react-router and esbuild, rather than a “monolithic framework,” was a source of uncertainty, but I now see it as a strength. It is cool that Vite will allow Remix to have a narrower role.
  • The actions/loaders/Forms paradigm is pretty great, but I am still not happy about FormData and not excited about how actions are typed in TypeScript and how much serialization gunk there is. God, that Twitter thread was irritating. I sometimes think conform might help here. We use tRPC a bunch because its DX is superior to Remix’s: the types “just work”, there’s no FormData inference & de-inference required, it’s easy to just call a method.
  • The Remix community is pretty good, there are some good guides and documentation sites to be had, especially epic stack. I’m not especially worried about the project losing steam - any concerns I had right after Shopify bought Remix are gone. I also don’t think that any of the React-alternative frameworks are that tempting yet, though I’m keeping an eye on SolidStart and SvelteKit. Please, please do not @ me about how much you like Vue, I do not care at all and you do not need my approval to keep liking Vue.
  • Most products have a Remix integration - instrumentation with OpenTelemetry just works, Sentry’s tracing integration just works. Clerk has an integration but using Clerk is one of my top regrets for this application: we’ve encountered so many bugs, and so little momentum on support and fixes.
  • We barely use nested layouts, one of Remix’s main features. There just aren’t that many opportunities to. We don’t really use loaders for the list of vals either: I think that full stack components are the answer here, but we haven’t implemented that yet.

In summary: Remix mostly gets out of the way and lets us do our work. It’s not a silver bullet, and I wish that it was more obvious how to do complex actions and that it had a better solution to the FormData-boilerplate-typed-actions quandary. But overall I’m really happy with the decision.

Running motivation hacks #

Things that have worked to get me back on a running regimen and might work for you:

  1. Try to run all the streets in my neighborhood. I use CityStrides, there are many similar apps.
  2. Run the same exact route, every time, at any speed: focus on consistency-only. Repetition legitimizes.
  3. Focus on just one Strava Segment and try to either become the “Local Legend” or get a good time.

Incentives #

My friend Forest has been sharing some good thoughts about open source and incentives. Coincidentally, this month saw a new wave of open source spam because of the tea.xyz project, which encouraged people to try and claim ‘ownership’ of existing open source projects to get crypto tokens.

The creator of tea.xyz, Max Howell, originally created Homebrew, the package manager for macOS which I use every day. He has put in the hours and days, and been on the other side of the most entitled users around. So I give him a lot of leeway with tea.xyz’s stumbles, even though they’re big stumbles.

Anyway, I think my idea is that murky incentives are kind of good. The incentives for contributing to open source right now, as I do often, are so hard to pin down. Sure, it’s improving the ecosystem, which satisfies my deep sense of duty. It’s maintaining my reputation and network, which is both social and career value. Contributing to open source is a way to learn, too: I’ve had one mentor early in my career, but besides that I’ve learned the most from people I barely know.

The fact that the incentives behind open source are so convoluted is what makes them sustainable and so hard to exploit. The web is an adversarial medium, is what I tell myself pretty often: every reward structure and application architecture is eventually abused, and that abuse will destroy everything if unchecked: whether it’s SEO spam, or trolling, or disinformation, no system maintains its own steady state without intentional intervention and design.

To bring it back around: tea.xyz created a simple, automatic incentive structure where there was previously a complex, intermediated one. And, like every crypto project that has tried that before, it appealed to scammers and produced the opposite of a community benefit.

If I got paid $5 for every upstream contribution to an open source project, I’d make a little money. It would be an additional benefit. But I’m afraid that the simplicity of that deal - the expectations that it would create, the new community participants that it would invite - would make me less likely to contribute, not more.


Code-folding JSX elements in CodeMirror #

This came up for Val Town - we implemented code folding in our default editor which uses CodeMirror, but wanted it to work with JSX elements, not just functions and control flow statements. It’s not enough to justify a module of its own because CodeMirror’s API is unbelievably well-designed:

import {
  tsxLanguage,
} from "@codemirror/lang-javascript";
import {
  foldInside,
  foldNodeProp,
} from "@codemirror/language";

/** tsxLanguage, with code folding for jsx elements */
export const foldableTsx = tsxLanguage.configure({
  props: [
    foldNodeProp.add({
      JSXElement: foldInside,
    }),
  ],
});

Then you can plug that into a LanguageSupport instance and use it. Amazing.
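
For example, a minimal sketch - foldableTsx here is just the configured language from the snippet above:

import { LanguageSupport } from "@codemirror/language";

// Wrap the configured language so it can be handed to the editor as an extension.
const tsxSupport = new LanguageSupport(foldableTsx);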

CSS Roundup #

I’ve been writing some CSS. My satisfaction with CSS ebbs and flows: sometimes I’m happy with its new features like :has, but on the other hand, CSS is one of the areas where you really often get bitten by browser incompatibilities. I remember the old days of JavaScript in which a stray trailing comma in an array would break Internet Explorer: we’re mostly past that now. But in CSS, chaos still reigns: mostly in Safari. Anyway, some notes as I go:

Safari and <details> elements

I’ve been using details more instead of Radix Collapsible for performance reasons. Using the platform! Feels nice, except for CSS. That silly caret icon shows up in Safari and not in Chrome, and breaks layout there. I thought the solution would involve list-style-type: none or appearance, but no, it’s something worse:

For Tailwind:

[&::-webkit-details-marker]:hidden

In CSS:

details::-webkit-details-marker {
  display: none;
}

flex min-content size

I’ve hit this bug so many times in Val Town. Anything that could be user-supplied and might bust out of a flex layout should absolutely have min-width: 0, so that it can shrink.

There’s a variant of this issue in grids and I’ve been defaulting to using minmax(0, 1fr) instead of 1fr to make sure that grid columns will shrink when they should.

Using Just #

I’ve been using just for a lot of my projects. It helps a bunch with the context-switching: I can open most project directories and run just dev, and it’ll boot up the server that I need. For example, the blog’s justfile has:

dev:
  bundle exec jekyll serve --watch --live --future

I used to use Makefiles a bit, but there’s a ton of tricky complexity in them, and they really weren’t made as cheat-sheets for running commands - Makefiles and make are really intended to build (make) programs, like running the C compiler. Justfiles are a lot easier to write.

Headlamps are better flashlights #

A brief and silly life-hack: headlamps are better flashlights. Most of the time when you are using a flashlight, you need to use your hands too. Headlamps solve that problem. They’re bright enough for most purposes and are usually smaller than flashlights too. There are very few reasons to get a flashlight. Just get a headlamp.

Don't use marked #

With all love to the maintainers, who are good people and are to some extent bound by their obligation to maintain compatibility, I just have to put it out there: if you have a new JavaScript/TypeScript project and you need to parse or render Markdown, why are you using marked?

In my mind, there are a few high priorities for Markdown parsers:

  • Security: marked isn’t secure by default. Yes, you can absolutely run DOMPurify on its output, but will you forget? Sure!
  • Standards: it’s nice to follow CommonMark! The original Markdown specification was famously permissive and imprecise. If you want to be able to switch Markdown renderers in the future, it’s going to be a lot nicer if you have a tight standard to rely on, to guarantee that you’ll get the same output.
  • Performance: Markdown rendering probably isn’t a bottleneck for your application, but it shouldn’t become one.

So, yeah. Marked is pretty performant, but it’s not secure by default and it doesn’t follow a standard - we can do better!

Use instead:

  • micromark: the “micro” Markdown parser primarily by wooorm, which is tiny and follows CommonMark. It’s great. Solid default.
  • remark: the most extensible Markdown parser you could ever imagine, also by wooorm.
  • markdown-it: don’t like wooorm’s style? markdown-it is pretty good too, secure by default, and CommonMark-supporting.

marked is really popular. It used to be the best option. But there are better options, use them!
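
For instance, swapping in micromark is about a one-liner. A minimal sketch - note that micromark escapes raw HTML by default unless you opt in with allowDangerousHtml:

import { micromark } from "micromark";

// CommonMark-compliant and secure by default: raw HTML in the input is escaped.
const html = micromark("# Hello, *world*");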

Replay.web is cool #

I’ve been trying to preserve as much of Placemark as possible now that it’s open source. This has been a mixed experience: some products were really easy to move away from, like Northwest and Earth Class Mail. Webflow was harder to quit. But replay.web came to the rescue, thanks to Henry Wilkinson at Webrecorder.

Now placemark.io is archived - nearly complete and at feature parity - but it costs next to nothing to maintain. The magic is the wacz format, which is a specific flavor of ZIP file that is readable with range requests. From the geospatial world, I’ve been thinking about range requests for a long time: they’re the special sauce in Protomaps and Cloud Optimized GeoTIFFs. They let you use big files, stored cheaply on object storage like Amazon S3 or Cloudflare R2, while letting browsers read those files incrementally, saving the browser time & memory and saving you transfer bandwidth & money.

So, the placemark.io web archive is on R2, the website is now on Cloudflare Pages, and the archive is little more than this one custom element:

<replay-web-page source="https://archive.placemark.io/placemark%202024-01-19.wacz" url="https://www.placemark.io/"></replay-web-page>

This is embedding replayweb.page. Cool stuff!

On Web Components #

God, it’s another post about Web Components and stuff, who am I to write this, who are you to read it

Carlana Johnson’s “Alternate Futures for Web Components” had me nodding all the way. There’s just this assumption that now that React is potentially on its way out (after a decade-long reign! not bad), the natural answer is Web Components. And I just don’t get it. I don’t get it. I think I’m a pretty open-minded guy, and I’m more than happy to test everything out, from Rails to Svelte to htmx to Elm. It’s all cool and good.

But “the problems I want to solve” and “the problems that Web Components solve” are like two distinct circles. What do they do for me? Binding JavaScript-driven behavior to elements automatically thanks to customElement? Sure - but that isn’t rocket science: you can get nice declarative behavior with htmx or hyperscript or alpine or stimulus. Isolating styles with Shadow DOM is super useful for embed-like components, but not for parts of an application where you want to have shared style elements. I shouldn’t sloppily restate the article: just read Carlana.

Anyway, I just don’t get it. And I find it so suspicious that everyone points to Web Components as a clear path forward, to create beautiful, fast applications, and yet… where are those applications? Sure, there’s “Photoshop on the Web”, but that’s surely a small slice of even Photoshop’s market, which is niche in itself. GitHub used to use Web Components but their new UIs are using React.

So where is it? Why hasn’t Netflix rebuilt itself on Web Components and boosted their user numbers by dumping the heavy framework? Why are interactive visualizations on the New York Times built with Svelte and not Web Components? Where’s the juice? If you have been using Web Components and winning, day after day, why not write about that and spread the word?

People don’t just use Rails because dhh is a convincing writer: they use it because Basecamp was a spectacular web application, and so was Ta-Da List, and so are Instacart, GitHub, and Shopify. They don’t just use React because it’s from Facebook and some brain-virus took them over: they use it because they’ve used Twitter and GitHub and Reddit and Mastodon and countless other sites that use React to create amazing interfaces.

Of course there’s hype and bullying and all the other social dynamics. React fans have had some Spectacularly Bad takes, and, boy, the Web Components crowd have as well. When I write a tender post about complexity and it gets summed up as “going to bat for React” and characterized in bizarre overstatement, I feel like the advocates are working hard to alienate their potential allies. We are supposed to get people to believe in our ideas, not just try to get them to lose faith in their own ideas!

I don’t know. I want to believe. I always want to believe. I want to use an application that genuinely rocks, and to find out that it’s WC all the way, and to look at the GitHub repo and think this is it, this is the right way to build applications. I want to be excited to use this technology because I see what’s possible using it. When is that going to happen?

“If you want to build a ship, don’t drum up people to collect wood and don’t assign them tasks and work, but rather teach them to long for the endless immensity of the sea.” - Antoine de Saint Exupéry

What editors do things use? #

How to set headers on objects in R2 using rclone #

How do you set a Cache-Control header on an object in R2 when you’re using rclone to upload?

I burned a lot of time figuring this out. There are a lot of options that look like they’ll do it, but here it is:

--header-upload='Cache-Control: max-age=604800,public,immutable'

That’s the flag you want to use with rclone copy to set a Cache-Control header with Cloudflare R2. Whew.

Reason: sure, you can set cache rules at like 5 levels of the Cloudflare stack - Cache Rules, etc. But it’s really hard to get the right caching behavior for static JavaScript bundles, which is:

  • 404s aren’t cached
  • Non-404s are cached heavily

This does it. Phew.

Chrome Devtools protip: Emulate a focused page #

This is a Devtools feature that you will only need once in a while, but it is a life-saver.

Some frontend libraries, like CodeMirror, have UIs like autocompletion, tooltips, or popovers that are triggered by typing text or hovering your mouse cursor, and that disappear when that interaction stops. This can make them extremely hard to debug: if you’re trying to design the UI of the CodeMirror autocomplete widget, every time that you right-click on the menu to click “Inspect”, or click away from the page to use the Chrome Devtools, it disappears.

Learn to love Emulate a focused page. It’s under the Rendering tab in the second row of tabs in Devtools - next to things like Console, Issues, Quick source, Animation.

Click the Rendering tab, find the Emulate a focused page checkbox, and check it. This will keep Chrome from firing blur events, so CodeMirror - or whatever library you’re debugging - never finds out that you’ve clicked out of the page. And now you can debug! Phew.

How could you make a scalable online geospatial editor? #

I’ve been thinking about this. Placemark is going open source in 10 days and I’m probably not founding another geo startup anytime soon. I’d love to found another bootstrapped startup eventually, but geospatial is hard.

Anyway, geospatial data is big, which does not combine well with real-time collaboration. Products end up either sacrificing some data-scalability (like Placemark) or sacrificing some editability by making some layers read-only “base layers” and focusing more on visualization instead. So web tools end up being mostly data-consumers, and most of the big work, like buffering huge polygons or processing raster GeoTIFFs, stays in QGIS, Esri, or Python scripts.

All of the new realtime-web-application stuff and the CRDT stuff is amazing - but I really cannot emphasize enough how geospatial data is a harder problem than text editing or drawing programs. The default assumption of GIS users is that it should be possible to upload and edit a 2 gigabyte file containing vector information. And unlike spreadsheets or lists or many other UIs, it’s also expected that we should be able to see all the data at once by zooming out: you can’t just window a subset of the data. GIS users are accustomed to seeing progress bars - progress bars are fine. But if you throw GIS data into most realtime systems, the system breaks.

One way of slicing this problem is to pre-process the data into a tiled format. Then you can map-reduce, or only do data transformation or editing on a subset of the data as a ‘preview’. However, this doesn’t work with all datasets and it tends to make assumptions about your data model.

I was thinking, if I were to do it again, and I won’t, but if I did:

I’d probably use driftingin.space or similar to run a session backend and use SQLite with litestream to load the dataset into the backend and stream out changes. So, when you click on a “map” to open it, we boot up a server and download database files from S3 or Cloudflare R2. That server runs for as long as you’re editing the map, it makes changes to its local in-memory database, and then streams those out to S3 using litestream. When you close the tab, the server shuts down.

The editing UI - the map - would be fully server-rendered and I’d build just enough client-side interaction to make interactions like point-dragging feel native. But the client, in general, would never download the full dataset. So, ideally the server runs WebGL or perhaps everything involved in WebGL except for the final rendering step - it would quickly generate tiles, even triangulate them, apply styles and remove data, so that it can send as few bytes as possible.

This would have the tradeoff that loading a map would take a while - maybe it’d take 10 seconds or more to load a map. But once you had, you could do geospatial operations really quickly because they’re in memory on the server. It’s pretty similar to Figma’s system, but with the exception that the client would be a lot lighter and the server would be heavier.

It would also have the tradeoff of not working offline, even temporarily. I unfortunately don’t see ‘offline-first’ becoming a real thing for a lot of use-cases for a long time: it’s too niche a requirement, and it is incredibly difficult to implement in a way that is fast, consistent, and not too complex.

codemirror-continue #

Wrote and released codemirror-continue today. When you’re writing a block comment in TypeScript and you hit “Enter”, this intelligently adds a * on the next line.

Most likely, your good editor (Neovim, VS Code) already has this behavior and you miss it in CodeMirror. So I wrote an extension that adds that behavior. Hooray!

I wish there was a better default for database IDs #

Every database ID scheme that I’ve used has had pretty serious downsides, and I wish there was a better option.

The perfect ID would:

  • Be friendly to distributed systems - multiple servers should be able to generate non-overlapping IDs at the same time. Even clients should be able to generate IDs.
  • Have good index locality. IDs should be semi-ordered so that new ones land in a particular shard or end up near the end of your btree index.
  • Have efficient database storage: if it’s a number, it’s stored as a number. If it’s binary, it should be stored as binary. Storing hexadecimal IDs as strings is a waste of space: Base16 takes up twice as much space as binary.
  • Be roughly standardized and future-proof. Cleverness is great, but IDs and data schemas tend to last a long time, and if they don’t last that long, need to survive migrations. A rare boutique ID scheme is a risk.
  • Obscure order and addresses - in other words, not be an auto-incrementing number. It is bad to reveal how many things are in a database, and also bad to give people a way to enumerate and find things by tweaking a number in a URL.

Almost nothing checks all these boxes:

  • Auto-incrementing bigints are almost perfect, but they aren’t friendly to distributed systems because only one computer knows what the next number is. They also reveal how many things are in a database. You can use Sqids to fix that, though - a surprisingly rare approach.
  • All of the versions of UUIDs that are fully standardized have pretty bad index behavior, and cause poor index locality - even v1. But they’re very distributed-systems friendly, and they definitely obscure numbering.
  • Orderable new schemes like ULID are cool, but there isn’t a straightforward way to store them as binary in Postgres the way UUIDs are stored, and they’re relatively niche - there’s no Postgres implementation of ULIDs, for example. A ULID can be stored in a UUID column, but it isn’t valid as a UUID.
  • UUID v7 looks like it checks every box, but it’s not fully standardized or broadly available yet. The JavaScript implementations are great but have very little uptake, and Postgres, both by default and in the uuid-ossp module, doesn’t support it.

So for the time being, what are we to do? I don’t have a good answer. Cross our fingers and wait for uuid v7.
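
For illustration, here’s a minimal sketch of what a UUID v7-style identifier looks like: 48 bits of Unix-millisecond timestamp up front (which is what buys the index locality), then random bits, with the version and variant bits set. For real systems, use a vetted library rather than this sketch:

// Rough UUID v7-style generator: 48-bit big-endian timestamp, then randomness.
function uuidv7ish(): string {
  const bytes = new Uint8Array(16);
  crypto.getRandomValues(bytes);
  const ts = BigInt(Date.now());
  for (let i = 0; i < 6; i++) {
    // Bytes 0..5 hold the millisecond timestamp, most significant byte first.
    bytes[i] = Number((ts >> BigInt(8 * (5 - i))) & 0xffn);
  }
  bytes[6] = (bytes[6] & 0x0f) | 0x70; // version 7
  bytes[8] = (bytes[8] & 0x3f) | 0x80; // RFC 4122 variant
  const hex = [...bytes].map((b) => b.toString(16).padStart(2, "0")).join("");
  return [
    hex.slice(0, 8),
    hex.slice(8, 12),
    hex.slice(12, 16),
    hex.slice(16, 20),
    hex.slice(20),
  ].join("-");
}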


Increasingly miffed about the state of React releases #

I am, relative to many, a sort of React apologist. Even though I’ve written at length about how it’s not good for every problem, I think React is a good solution to many problems. I think the React team has good intentions. Even though React is not a solution to everything, it is a solution to some things. Even though React has been overused and has flaws, I don’t think the team is evil. Bask in my equanimity, etc.

However,

The state of React releases right now is bad. There are two major competing React frameworks: Remix, funded by Shopify, and Next.js, funded by Vercel. Vercel hired many members of the React team.

It has been one and a half years since the last React release, far longer than any previous release took.

Next.js is heavily using and promoting features that are in the next release. They vendor a version of the next release of React and use some trickery to make it seem like you’re using React 18.2.0 when in fact you’re using a canary release. These “canary releases” are used for incremental upgrades at Meta, too, where other React core developers work.

On the other hand, the non-Vercel and non-Facebook ecosystems don’t have these advantages. Remix suffers from an error in React that is fixed, but not released. People trying to use React 18.3.0 canary releases will have to use npm install --force or overrides in their package.json files to tie it all together.

This strategy, of using Canary releases for a year and a half and then doing some big-bang upgrade to React 19.0.0: I don’t like it. Sure, there are workarounds to use “current” Canary React. But they’re hacks, and the Canary releases are not stable and can quietly include breaking changes. All in all, it has the outward appearance of Vercel having bought an unfair year headstart by bringing part of the React team in-house.

Luxury of simplicity #

An evergreen blog topic is “writing my own blogging engine because the ones out there are too complicated.” With the risk of stating the obvious:

Writing a blog engine, with one customer, yourself, is the most luxuriously simple web application possible. Complexity lies in:

  • Diversity of use-cases: applications that need to work on multiple devices, in different languages, with screen readers, and maybe offline or on a particular network.
  • The real world. Everything about the real world is complicated: time, names, geography, everything. Governments can’t simplify this very much, so they build extremely complicated technology so they can serve every citizen (in theory). Companies can “define their customers” and simplify this a bit. Individuals can simplify this a lot: just me, my timezone, my language.
  • The problem area. Something like how Microsoft Word decides cursor position, or how Excel calculates formulas - there are actually hard problems out there, with real complexity and no simple solution.

Which is all to say, when I read some rant about how React or Svelte or XYZ is complicated and then I see the author builds marketing websites or blogs or is a Java programmer who tinkers with websites but hates it – it all stinks of narrow-mindedness. Or saying that even Jekyll is complicated, so they want to build their own thing. And, go ahead - build your own static site generator, do your own thing. But the obvious reason why things are complicated isn’t because people like complexity - it’s because things like Jekyll have users with different needs.

Yes: JavaScript frameworks are overkill for many shopping websites. It’s definitely overkill for blogs and marketing sites. It’s misused, just like every technology is misused. But not being exposed to the problems that it solves does not mean that those problems don’t exist.

HTML-maximalists arguing that it’s the best way probably haven’t worked on a hard enough problem to notice how insufficient SELECT boxes are, or how the dialog element just doesn’t help much. They complain about ARIA accessibility based on out-of-date notions, when the accessibility of modern UI libraries is nothing short of fantastic. And what about dealing with complex state? Keybindings with different behaviors based on UI state. Actions that re-render parts of the page - if you update “units” from miles to meters, you want the map scale, the title element, and the measuring tools to all update seamlessly. HTML has no native solution for client-side state management, and some applications genuinely require it.

And my blog is an example of the luxury of simplicity – it’s incredibly simple! I designed the problem to make it simple so that the solution could be simple. If I needed to edit blog posts on my phone, it’d be more complicated. Or if there was a site search function. Those are normal things I’d need to do if I had a customer. And if I had big enough requirements, I’d probably use more advanced technology, because the problem would be different: I wouldn’t indignantly insist on still using some particular technology.

Not everyone, or every project, has the luxury of simplicity. If you think that it’s possible to solve complicated problems with simpler tools, go for it: maybe there’s incidental complexity to solve. If you can solve it in a convincing way, the web developers will love you and you might hit the jackpot and be able to live off of GitHub sponsors alone.

See also a new start for the web, where I wrote about “the document web” versus “the application web.”

How are we supposed to do tooltips now? #

I’ve been working on oldfashioned.tech, which is sort of a testbed to learn about htmx and the other paths: vanilla CSS instead of Tailwind, server-rendering for as much as possible.

How are tooltips and modals supposed to work outside of the framework world? What the Web Platform provides is insufficient:

  • The title attribute is unstyled and only shows up after hovering for a long time over an element.
  • The dialog element is bizarrely unusable without JavaScript and basically doesn’t give you much: great libraries like Radix don’t use it and use role="dialog" instead.

So, what to do? There’s:

  • The pure CSS option. Seems like balloon.css is the main example. Unmaintained for three years, but maybe that works? Wouldn’t have the right placement for tooltips if they’re on an edge of the screen. Tooltips also can’t contain HTML or styling.
  • Or maybe I should use floating-ui and write a little extension. The DOM-only version of floating-ui is tiny, and the library is very high quality and used everywhere - it’s what Radix uses.
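
If I went the floating-ui route, the core would look something like this - a minimal sketch using @floating-ui/dom’s computePosition with the offset, flip, and shift middleware; the element lookups and IDs are just illustrative:

import { computePosition, flip, offset, shift } from "@floating-ui/dom";

// The tooltip element should be position: absolute so the computed
// coordinates can be applied directly.
const trigger = document.querySelector<HTMLElement>("#save-button")!;
const tooltip = document.querySelector<HTMLElement>("#save-tooltip")!;

async function positionTooltip() {
  const { x, y } = await computePosition(trigger, tooltip, {
    placement: "top",
    middleware: [offset(6), flip(), shift({ padding: 4 })],
  });
  Object.assign(tooltip.style, { left: `${x}px`, top: `${y}px` });
}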

I think it’s kind of a bummer that there just aren’t clear options for this kind of thing.

The module pattern really isn't needed anymore #

I wrote about this pattern years ago, and wrote an update, and then classes became broadly available in JavaScript. I was kind of skeptical of class syntax when it came out, but now there really isn’t any reason to use any other kind of “class” style than the ES6 syntax. The module pattern used to have a few advantages:

  • You didn’t need to keep remembering what this was referring to - before arrow functions this was a really confusing question.
  • You could have private variables.

Well, now that classes can use arrow functions to simplify the meaning of this, and private properties are supported everywhere, we can basically declare the practice of using closures as pseudo-classes to be officially legacy.
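
To make the comparison concrete, here’s a minimal sketch of the same tiny “class” written both ways:

// The old module pattern: a closure standing in for a class.
function makeCounter() {
  let count = 0; // "private" via closure
  return {
    increment: () => ++count,
    value: () => count,
  };
}

// The modern equivalent: a real private field, plus an arrow-function
// method so `this` stays bound even if the method is passed around.
class Counter {
  #count = 0;
  increment = () => ++this.#count;
  get value() {
    return this.#count;
  }
}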

patch-package can bail you out of some bad situations #

Let’s say you’re running some web application and suddenly you hit a bug in one of your dependencies. It’s all deployed, lots of people are seeing the downtime, but you can’t just push an update because the bug is in something you’ve installed from npm.

Remember patch-package. It’s an npm module that you install, and then you:

  • Edit the dependency source code directly in node_modules
  • Run npx patch-package some-package
  • Add "postinstall": "patch-package" to your scripts

And from now on when npm install runs, it tweaks and fixes the package with a bug. Obviously submit a pull request and fix the problem at its source later, but in times of desperation, this is a way to fix the problem in a few minutes rather than an hour. This is from experience… experience from earlier today.

SaaS exits #

I’ve been moving things for Placemark’s shutdown as a company and noting some of the exit experiences:

  • Loom is surprisingly hard to exit from. There’s no bulk export option, no way to export metadata.
  • Webflow doesn’t support exporting sites with CMS collections (blogs, docs, etc). It supports exporting the CMS content, and the templates, but not the two together.
  • Earth Class Mail has a pretty respectable offboarding flow that does a good job warning you of the ramifications of closing the virtual address.
  • Legalinc’s service to close down the LLC was fast and cost about $600. Maybe there are cheaper options, but I’m satisfied with the speed & ease of use.
  • Northwest registered agent was also super clear and easy to close down. I had a great experience with them from start to finish.

You can finally use :has() in most places #

The hot new thing in CSS is :has(), and Firefox finally supports it, starting today - so the compatibility table is pretty decent (89% at this writing). I already used :has() in a previous post - that Strava CSS hack - but I’m finding it useful in so many places.

For example, in Val Town we have some UI that shows up on hover and disappears when you hover out - but we also want it to stay visible if you’ve opened a menu within the UI. The previous solution required using React state and passing it through components. The new solution is so much simpler - it just takes advantage of Radix’s excellent attention to accessibility - so if something in the UI has aria-expanded=true, we show the parent element:

.valtown-pin-visible:has([aria-expanded="true"]) {
  opacity: 1;
}

Thoughts on storing stuff in databases #

  • User preferences should be columns in the users table. Don’t get clever with a json column or hstore. When you introduce new preferences, the power of types and default values is worth the headache of managing columns.
  • Emails should probably be citext, case-insensitive text. But don’t count on that to prevent people from signing up multiple times - there are many ways to do that.
  • Most text columns should be TEXT. The char-limited versions like varchar aren’t any faster or better on Postgres.
  • Just try not to use json or jsonb columns, ever. Having a schema is so useful. I have regretted every time that I used these things.
  • Make as many things NOT NULL as possible. Basically the same as “don’t use json” - if you don’t enforce null checks at the database level, null values will probably sneak in eventually.
  • Most of the time choose an enum instead of a boolean. There is usually a third value beyond true & false that you’ll realize you need.
  • Generally store times and dates without timezones. There are very, very few cases where you want to store the original timezone rather than store everything in UTC and format it to the user’s TZ at display time.
  • Most tables should have a createdAt column that defaults to NOW(). Chances are, you’ll need it eventually.

Hiding Peloton and Zwift workouts on Strava #

I love Strava, and a lot of my friends do too. And some of them do most of their workouts with Peloton, Zwift, and other “integrations.” It’s great for them, but the activities just look like ads for Peloton and don’t have any of the things that I like about Strava’s community.

Strava doesn’t provide the option to hide these, so I wrote a user style that I use with Stylus - also published to userstyles.org. This hides Peloton workouts.

@-moz-document url-prefix("https://www.strava.com/dashboard") {
    .feed-ui > div:has([data-testid="partner_tag"]) {
        display: none;
    }
}

How I write and publish the microblog #

This microblog, by the way… I felt like real blog posts on macwright.com were becoming too “official” feeling to post little notes-to-self and tech tricks and whatnot.

The setup is intentionally pretty boring. I have been using Obsidian for notetaking, and I store micro blog posts in a folder in Obsidian called Microblog. The blog posts have YAML frontmatter that’s compatible with Jekyll, so I can just show them in my existing, boring site, and deploy them the same way as I do the site - with Netlify.

I use the Templater plugin, which is powerful but unintuitive, to create new Microblog posts. The key line is:

<% await tp.file.move("/Microblog/" + tp.file.creation_date("YYYY[-]MM[-]DD")) %>

This moves a newly-created Template file to the Microblog directory with a Jekyll-friendly date prefix. Then I just have a command in the main macwright.com repo that copies over the folder:

microblog:
  rm -f _posts/micro/*.md
  cp ~/obsidian/Documents/Microblog/* _posts/micro

This is using Just, which I use as a simpler alternative to Makefiles, but… it’s just, you know, a cp command. Could be done with anything.

So, anyway - I considered Obsidian Publish but I don’t want to build a digital garden. I have indulged in some of the fun linking-stuff-to-stuff patterns that Obsidian-heads love, but ultimately I think it’s usually pointless for me.

awesome-codemirror #

I started another “awesome” GitHub repo (a list of resources), for CodeMirror, called awesome-codemirror. CodeMirror has a community page but I wanted a freewheeling easy-to-contribute-to alternative. Who knows if it’ll grow to the size of awesome-geojson - 2.1k stars as of this writing!

Make a ViewPlugin configurable in CodeMirror #

ViewPlugin.fromClass only allows the class constructor to take a single argument: the CodeMirror view. So how do you make a plugin configurable?

You use a Facet. There’s a great example in JupyterLab. Like everything in CodeMirror, this lets you be super flexible with how configuration works - it is designed with multiple reconfigurations in mind.

Example defining the facet:

export const suggestionConfigFacet = Facet.define<
  { acceptOnClick: boolean },
  { acceptOnClick: boolean }
>({
  combine(value) {
    return { acceptOnClick: !!value.at(-1)?.acceptOnClick };
  },
});

Initializing the facet:

suggestionConfigFacet.of({ acceptOnClick: true });

Reading the facet:

const config = view.state.facet(suggestionConfigFacet);
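
Finally, a minimal sketch of reading the facet from inside a ViewPlugin, so the plugin needs no constructor arguments beyond the view - the plugin body here is illustrative and uses the suggestionConfigFacet defined above:

import { EditorView, ViewPlugin } from "@codemirror/view";

const suggestionPlugin = ViewPlugin.fromClass(
  class {
    acceptOnClick: boolean;
    constructor(view: EditorView) {
      // Read the combined configuration from editor state instead of
      // taking extra constructor arguments.
      this.acceptOnClick = view.state.facet(suggestionConfigFacet).acceptOnClick;
    }
  }
);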

A shortcut for bash using tt #

I heavily use the ~/tmp directory of my computer and have the habit of moving to it, creating a new temporary directory, moving into that, and creating a short-lived project. Finally I automated that and have been actually using the automation:

I wrote this tiny zsh function called tt that creates a new directory in ~/tmp/ and cd’s to it:

tt() {
    RANDOM_DIR=$(date +%Y%m%d%H%M%S)-$RANDOM
    mkdir -p ~/tmp/"$RANDOM_DIR"
    cd ~/tmp/"$RANDOM_DIR" || return
}

This goes in my .zshrc.

Get the text of an uploaded file in Remix #

This took way too long to figure out.

The File polyfill in Remix has the fresh new .stream() and .arrayBuffer() methods, which aren’t mentioned on MDN. So, assuming you’re in an action and the argument is args, you can get the body like:

const body = await unstable_parseMultipartFormData(
  args.request,
  unstable_createMemoryUploadHandler()
);

Then, get the file and get its text with the .text() method. The useful methods are the ones inherited from Blob.

const file = body.get("envfile");

if (file instanceof File) {
   const text = await file.text();
   console.log(text);
}

And you’re done! I wish this didn’t take me so long.