Tim Disney

MCP is neat

Jotting down an aha! moment I had about how MCP can augment the utility of LLMs in some of the things I do every day.

The background is that I've been trying to be more intentional about committing things to memory using Anki and spaced repetition. I was inspired years ago by Michael Nielsen's article about using SRS to read and remember ideas from papers, so that's my main goal: commit interesting ideas from papers to memory.

When I first read that article I tried to build an Anki routine but fell off—the activation energy required to add cards was always too high (also writing good prompts is hard).

LLMs are really good at summarizing and restructuring text, so I wondered if they could help with writing Anki cards. I'm not sure this is the right approach long-term, since writing cards by hand would almost certainly aid the learning process. That said, it turns out I couldn't sustain the manual effort, so I think it's worth experimenting to see whether a different approach helps.

So my approach is to use an LLM to help write the cards. MCP then lets me easily connect the LLM directly to Anki and even further reduce the activation energy required for authoring cards.

I'm using Raycast, but the basic idea works in any MCP-capable LLM client (e.g. Claude desktop). The MCP server I'm using is anki-mcp-server, which requires Anki and the AnkiConnect plugin. Wiring up the Anki MCP server to Raycast is super straightforward and took me all of five minutes.
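Under the hood, the MCP server is just driving AnkiConnect's local HTTP API (JSON over port 8765 by default). For a sense of what that wiring amounts to, here's a minimal Python sketch of adding a Basic card directly; the deck name and helper names are my own, and it assumes Anki is running with AnkiConnect installed:

```python
import json
import urllib.request

# AnkiConnect's default endpoint (assumes Anki is running with the plugin)
ANKI_CONNECT_URL = "http://localhost:8765"

def make_add_note_payload(deck: str, front: str, back: str) -> dict:
    """Build an AnkiConnect 'addNote' request for a Basic note type."""
    return {
        "action": "addNote",
        "version": 6,
        "params": {
            "note": {
                "deckName": deck,
                "modelName": "Basic",
                "fields": {"Front": front, "Back": back},
                "options": {"allowDuplicate": False},
            }
        },
    }

def add_note(deck: str, front: str, back: str) -> dict:
    """POST the payload to the locally running Anki via AnkiConnect."""
    req = urllib.request.Request(
        ANKI_CONNECT_URL,
        data=json.dumps(make_add_note_payload(deck, front, back)).encode("utf-8"),
        headers={"Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        # On success AnkiConnect returns {"result": <new note id>, "error": null}
        return json.load(resp)
```

The MCP server wraps calls like this as tools the model can invoke, which is why the whole setup is just "install the plugin, point the client at the server."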

For articles, there are a few kinds of prompts that seem to work. If the article is well-known and in the model's training set, you can simply say:

suggest some anki cards for the article "As We May Think" by Vannevar Bush

Claude 4 responds with something like:

Basic Information Cards

Card 1:

Front: Who wrote "As We May Think" and when was it published?

Back: Vannevar Bush, published in The Atlantic Monthly in July 1945

Core Concepts

Card 2:

Front: What is the "Memex" as described by Vannevar Bush?

Back: A hypothetical electromechanical device that would store books, records, and communications, allowing users to retrieve and link information through associative trails

...

For newer or less well-known articles, you can paste the entire article or a URL (if the model supports web browsing). Not every suggested card will be useful; sometimes the model suggests trivia I don't care about. But it's easy to go back and forth with the model to pick the best ones. You can also add some pre-prompt instructions to guide the model toward the kinds of cards that are most helpful, but honestly, it's quick enough to just pick out the good ones.

Once I have the cards I like, it’s a simple instruction to add them to Anki:

now add those cards to @anki

(Raycast uses the @ notation to add an MCP server to a chat, but you can also provide it as a chat preset.)

Raycast also provides a nice feature to wrap commonly used prompts in AI commands. For example, I’ve added an “ankify” command with a prompt like:

from the article in {browser-tab}, suggest some anki cards to help me remember the salient points. when referencing the article use a short version of its title and link to it

This does what it sounds like: it provides the currently focused browser tab as context to the model. I can then check the suggested cards and follow up by telling it to add them to my Anki deck.

I think it's pretty neat that thanks to MCP we can now wire up these powerful capabilities to our LLMs in a matter of minutes.


New Things

Big changes happening in my life. After almost a decade, today was my last day working at Shape Security / F5.

It was a really good run. I had the chance to work with incredible folks building wild things on the web, and I think we made it a little safer and more convenient for everyone.

Now it's time for something completely different. Not entirely sure what that is yet but I'll figure out something fun soon.


Large AI Models Are Cultural and Social Technologies

Large AI Models Are Cultural and Social Technologies. The way LLMs are often framed is as "intelligent agents" but maybe this is the wrong framing. Rather we should view them as "cultural and social technologies" (like the printing press or markets) that are "allowing humans to take advantage of information other humans have accumulated".

Not as snappy or exciting in a science fiction sort of way but more accurate without being dismissive of their impact on society.


Zed Edit Prediction Feature

Zed now predicts your next edit with Zeta, our new open model. Played around with Zed's edit prediction feature today and it worked surprisingly well.

I like this feature because it sits on the spectrum between "just chat with Claude" on the one hand and "describe what Cursor should do" on the other. It augments your process rather than trying to do it all for you.

Side note: it sounds like the voice-over in their video is AI-generated, and I really don't like it. I don't understand why they did that; I would much prefer to hear their "unprofessional" developer voices.


Hypermedia Controls - From Feral to Formal

Hypermedia Controls: From Feral to Formal. An interesting paper that tries to locate and formalize a set of core primitives in hypermedia systems, as expressed in HTMX. It identifies a "hypermedia control" as consisting of four mechanisms: (1) an element that (2) responds to an event trigger by (3) sending a network request and (4) placing the response at some position in the viewport. By enhancing a hypermedia system with primitives that let you manipulate each of those mechanisms, you can declaratively extend the system with your own hypermedia controls.

An example they give:

<button hx-trigger="click" hx-post="/clicked" hx-target="#output">
  Issue a request
</button>
<output id="output"> </output>

When the user clicks the button, the system issues a network request to /clicked and places the response in the <output id="output"> element.
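The payoff of factoring controls this way is that the same four mechanisms describe other controls declaratively. As a sketch of my own (not from the paper), HTMX's built-in timer trigger turns a plain div into a polling control:

```html
<!-- (1) an element that (2) fires every 2 seconds, (3) GETs /news,
     and (4) swaps the response into its own inner HTML -->
<div hx-get="/news" hx-trigger="every 2s" hx-target="this" hx-swap="innerHTML">
  Latest news loads here
</div>
```

Same element/trigger/request/placement shape as the button example, just with a timer instead of a click.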

This is interesting as far as it goes, but I'm not convinced that the "hypermedia maximalist" approach is really all that great a way to develop systems.


We're Getting the Social Media Crisis Wrong

We're Getting the Social Media Crisis Wrong. It's not about misinformation, it's about groups with collective misunderstandings:

The fundamental problem, as I see it, is not that social media misinforms individuals about what is true or untrue but that it creates publics with malformed collective understandings.

I really like this subtle shift in perspective. It aligns with my distrust of social media for distorting people's behavior around chasing engagement, but he identifies a more specific issue:

The more important change is to our beliefs about what other people think

Our beliefs and opinions about the world are influenced by what we think other people think and social media is a (distorted) machine that tells us what other people think.


Decentralized Systems Aren't

Decentralized Systems Aren't. Centralized systems will always layer on top of decentralized systems unless you figure out how to fix the underlying economic problem of increasing returns to scale.

To actually get a permissionless decentralized system you need: