
Don't Migrate, Harmonize: A Practical Setup for Claude + Codex Skills


Over the last few weeks, I have found myself switching between Claude Code and Codex. Claude Code is great fun for starting a project, but when you go deeper into the trenches, Codex has the sharper axe. It’s much slower than Claude Code, yet often its first solution just works, even after Claude Code has tried a few times.

I also wanted to try out these lines from a Go expert in my AGENTS.md file for both agents. Especially the part about no longer leaving behind build artifacts would be much appreciated.

I found some information about migrating skills and other central files from Claude to Codex, but I couldn’t find a good way to harmonize them. The following is the result of a coding session: a description of the architecture I ended up with and the small set of rules that make it work.

Please note that my skills usually consist of a Markdown file and a Bash script. The Markdown file establishes the correct parameters to send to the script; these depend heavily on the environment we’re running in and what we actually want to do. I find this a much nicer interface than having to come up with parameters myself on the CLI. I also make the agents responsible for supervising the script and checking that the output is as intended, which is helpful for me and much faster than doing it myself.

The Architecture

Single source of truth per skill:

skills/
  treeos-release-version/
    SKILL.md
    run.sh

  • SKILL.md is the canonical instruction file.
  • run.sh is the canonical script (short name, always in the same place).

Claude entry point (symlink):

.claude/commands/treeos-release-version.md -> ../../skills/treeos-release-version/SKILL.md

Claude still sees its command file, but it’s just a symlink to the real skill file. No duplication.

Codex entry point (symlink):

~/.codex/skills/treeos-release-version -> /path/to/repo/skills/treeos-release-version

Codex loads from its own central skills directory. So the solution is the same: symlink the skill folder.
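Both entry points boil down to plain `ln -s` calls. Here is a minimal sketch, run from the repo root; the paths follow the article’s example, while the `mkdir -p` lines and the `-fn` flags (replace a stale link, don’t descend into it) are my additions for idempotence:

```shell
# one-time setup from the repo root (creates the folders if missing)
mkdir -p .claude/commands skills/treeos-release-version "$HOME/.codex/skills"

# Claude entry point: relative symlink to the canonical SKILL.md
ln -sfn ../../skills/treeos-release-version/SKILL.md \
  .claude/commands/treeos-release-version.md

# Codex entry point: absolute symlink to the whole skill folder
ln -sfn "$(pwd)/skills/treeos-release-version" \
  "$HOME/.codex/skills/treeos-release-version"
```

Note the asymmetry: Claude gets a relative link to a single file inside the repo, while Codex gets an absolute link to the folder, since `~/.codex/skills` lives outside the repo.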

Why This Works

  • No drift. You only edit skills/<name>/SKILL.md.
  • Both agents stay in sync. One file, two entry points.
  • Short script names. Everything is run.sh, which keeps paths readable.
  • Easy to audit. Every skill is a folder; you can scan it with your eyes.

A Few Tricks That Matter

1) Use SKILL.md as the canonical file. Codex is strict about frontmatter. It expects name and description in YAML at the top. That’s your single most common failure mode.
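I haven’t seen Codex’s exact schema spelled out here, so treat this as a sketch: a minimal SKILL.md header with the two fields the post says Codex expects (the description text is made up):

```markdown
---
name: treeos-release-version
description: Guides the agent through cutting a TreeOS release by running run.sh with the right parameters.
---

The instructions for the agent follow below the frontmatter.
```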

2) Don’t duplicate scripts. If you have scripts inside .claude/commands, they’ll drift. Move them into skills/<name>/run.sh and reference that in the instructions.

3) Claude supports symlinks. You don’t need a generator script. Symlink the .md files and keep the content in one place.

4) The Mac’s default file system is case-insensitive. You cannot have both skill.md and SKILL.md in the same folder. Pick SKILL.md and stick with it.

Installing Skills for Codex

Codex only loads from its own directory, so you need to link your skills into it. I added a tiny helper:

./scripts/install-codex-skills.sh

It walks through skills/* and symlinks each one into ~/.codex/skills.

That’s the only “install step” you need. Re-run it when you add a new skill.
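The helper itself isn’t shown in the post, so here is a minimal sketch of what it might look like, written as a POSIX shell function so the source and target directories can be overridden; the defaults and the error handling are my assumptions:

```shell
#!/usr/bin/env sh
set -eu

# Symlink every folder under $repo_root/skills into $codex_skills.
# Defaults match the article's layout; pass other paths for testing.
install_codex_skills() {
  repo_root="${1:-$(pwd)}"
  codex_skills="${2:-$HOME/.codex/skills}"
  mkdir -p "$codex_skills"
  for skill in "$repo_root"/skills/*/; do
    [ -d "$skill" ] || continue              # skills/ may be empty
    name="$(basename "$skill")"
    # -f replaces a stale link, -n stops ln from descending into it
    ln -sfn "${skill%/}" "$codex_skills/$name"
    echo "linked $name"
  done
}

# Re-run after adding a new skill:
# install_codex_skills
```

Because `ln -sfn` replaces existing links, re-running it after adding a skill is safe and does nothing to skills that are already linked.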

The Result

  • Claude sees the skills through .claude/commands symlinks.
  • Codex sees the same skills through ~/.codex/skills symlinks.
  • You edit one file, everything stays consistent.

I’ve implemented this setup fully in the TreeOS repo — you can see the exact structure, symlinks, and helper script here: ontree-co/treeos.

Just Ralph it, baby!


This year started with a lot of beef. I gave a talk last year titled ‘The Claude Code Wars’, and it seems that we have now entered the second episode.

This is where it all began, roughly six months ago. Geoffrey Huntley wrote a blog post about how Ralphing works. It’s based on a simple idea: Pick a prompt, then let coding agents run on this prompt for as long as necessary, or until you are happy with the result. This approach became a meme, solving the “human-in-the-loop” problem that agents have today by brute-forcing the approach and burning tokens.

Fast forward to January and Gas Town by Steve Yegge is released. It’s a multi-agent orchestration system with the following architecture diagram:

Gas Town Architecture

If that sounds complicated, that’s because it is. You have one main agent to communicate with: the mayor. The mayor then commands a team of different roles to build your project. Apparently, people are using multiple Claude Max $200 plans simultaneously with this approach. It instantly became very popular.

Gas Town was probably built using a similar approach. You can see in the Commits graph on GitHub that an absurd number of commits were created in a short amount of time.

Gas Town Commit Graph

Although Gas Town is more elaborate than Ralphing, the promise is the same: building software doesn’t require any skill anymore. It can be completely replaced by brute-forcing it with average agents. Give them enough time and your project will surpass any project written by humans.

On the other side of the arena are people with a deep commitment to software engineering who are trying to combine the power of their trained brains with that of agents. Peter’s post, Shipping at Inference-Speed, is an excellent summary and illustrates the extent to which standard software engineering practices have diverged in the last year.

Armin’s post delves deeply into the topic and is well researched. For example, Beads is an issue tracker created by the makers of Gas Town that uses 240k lines of code to track issues in a simple format.

Highly interesting popcorn time 🍿. Am I watching everything from the sidelines? I don’t think so.

We’re seeing very different ways of using agents to assist with software project development. Unless you take the totally agnostic stance of not using agents at all, it’s impossible not to position yourself on some side here.

I’m investing my time and money in enhancing my existing software development skills with agentic tools like Claude Code or Codex. I don’t want to be completely replaced by brute force approaches. Fingers crossed! 🤞

The Case for TreeOS, Part 1


In December, I released the first beta version of my small operating system, TreeOS. It is freely available on GitHub and is accompanied by a B2B solution called OnTree.co that lets you run internal or private AI-powered apps and skills 100% locally.

I am not happy with the software quality yet, but looking back, this is not surprising. When I started working with agents, it made sense to begin with the most extreme approach: writing the entire code base with agents. Unsurprisingly, agentic engineering is very different from classical software engineering. I have learned a lot, as you can see in the result. I summarized my learnings in a conference talk I gave yesterday in Berlin, and will probably do so at one of our upcoming Agentic Coding Meetups in Hamburg as well. I also have a much clearer picture of how to improve the code base.

This is the first in a series about the reasons that led me to build TreeOS.

The LLM Gap

There is a gap between what current computers and smartphones can do locally and what is possible in the cloud with massive GPUs. Here is a rough classification:

Mobile models (2B-8B)

There are some excellent models in this size class, often highly specialized for one use case. For example, I use these models for speech-to-text or text-to-speech locally. Modern smartphones have up to 16 GB of memory, so the operating system and these models can run in parallel. From a battery perspective, it is important that these models only run for short periods of time, or your battery drains too fast.

Local models (8B-32B)

This is the classic local model class. There are a huge number of optimized models. Users of these models typically have a powerful gaming graphics card with 16-32 GB of VRAM. This allows them to use their CPU normally and run long-running tasks on the graphics card.

Cloud models (200B and up)

The exact sizes of the major cloud models are not disclosed. One of the leading open-source models is Kimi K2 Thinking, which has 1 trillion parameters. This equates to a requirement of roughly 250 GB of RAM. Besides a maxed-out Mac Studio with 500 GB of RAM, I am not aware of any off-the-shelf hardware solution that can run these models locally.

The gap (32B-200B)

This gap is not receiving much attention at the moment, but I am optimistic that this will change in 2026. We have already seen that mixture-of-experts models, such as GPT-OSS 120B, are highly capable and can run at reasonable speeds. Computers in this class can also run a 32B model and hold a lot of context. This is important for agentic workflows, where the agent must build a mental map of the problem, whether it is code-related or not. However, as this context also needs to be held in graphics RAM, consumer graphics cards are not suitable for these workloads.

With the AMD AI 300 and 400 series, affordable machines are now available that are ideally suited to long-running asynchronous tasks.

The first bet of TreeOS is that this niche will grow in 2026.

Two Book Recommendations: Sorry, No Happy Endings!


In 2026, I will cover a wider range of topics beyond AI and 3D printing. To kick off the new year, I want to share two books that have reshaped my thinking over the past year. Unfortunately, one is only available in German. The other is available in both English and German.

Deutsche Militärgeschichte by Stig Förster

I read a review of this book in Süddeutsche Zeitung. As I’m not at all a fan of the military, it took me a while to download the sample to my Kindle app and start reading. At school, I never had a good history teacher and I always hated the subject. I mostly remember it as a boring game of memorising years without any connection between them, which my brain didn’t want to remember. Like most brains, mine works better if there’s a story, connection and reasoning included.

This book also mentions years, but it’s not at all about exact dates. It’s much more about the context of society at that time and the dynamics that often inevitably led to military conflicts. Stig Förster believes that the military is part of human organisation and that violent force is recurring, whether we want it or not. Having read it, I tend to agree, which unfortunately makes the outlook on the next decade grim.

Interestingly, the book covers the period from 1525 to 2025, continuing up to the present day and including the current conflict with Russia. Although I found the final chapters the most interesting, I found the entire 1,294-page book a surprisingly easy read. Although I studied World War I and World War II extensively at school, the book provided me with many new insights into these periods. For example, I learnt that the Nazi regime knew as early as 1941 that they would ultimately lose the war and adjusted their goals accordingly.

Buy it as an ebook at Thalia and you get an EPUB without any DRM.

How Countries Go Broke, The Big Cycle by Ray Dalio

Another book that I initially resisted starting. Ray Dalio is a major hedge fund manager. What could I possibly learn from a greedy finance guy whose only purpose in life is to maximise his own profits?

Quite a lot, as it turns out. It’s also very much a history book, albeit a financial history book. The central claim is that economies go through a major cycle every 50 to 150 years. Our cycle began in 1950, after the Second World War, and is now in its final phase. The author doesn’t just make this claim, but also presents extensive data from previous cycles, arguing that this knowledge is largely hidden because most people experience it only once in their lifetime.

He also offers a fascinating perspective on China, presenting it as more than just the evil state it is often portrayed as. It was a much harder read than Stig Förster’s book because it contains a lot of information about the financial system that I hadn’t come across before. If you’re ever curious about macroeconomics, whether because of Bitcoin, debt, or what bonds are, you’re in for a treat.

Buy here at Thalia or here at Amazon, both with DRM.

I wish you a great start to 2026!

A Personal Note: And No, It’s Not The End!


It’s the week before Christmas. You’re probably feeling the pressure to live up to all the expectations that Christmas brings each year. I hope you can let a few of them slide and choose to have a good time for a few hours in between.

From the outset, I have treated my newsletter The Liquid Engineer as an experimental playground. I started out with lots of motivation for 3D printing, and I am still convinced it’s a hugely powerful technology that will be adopted much more widely in the coming years. Then, even more interesting things happened: I switched my focus to agentic coding. In recent weeks, I’ve also incorporated a lot of content on local LLMs and the necessary software and hardware for this. This is because I’m trying to build a business in this area with OnTree.

Thank you for following me on this journey so far!

I realized that I write my best posts when I write about topics that interest me. I will continue to write this newsletter, but I won’t be bound to any particular topics in the future. It might be about something that has just happened or something that I am currently working on. If you joined me for the AI content, I invite you to stick around over the next few weeks to see if you can still find value in the content. Since AI is keeping me busy, I expect there will still be plenty of AI-related content. If there’s a topic you’d like to read about, just hit reply and let me know!

For the past few months, I have also been publishing the newsletter posts right here, on my personal blog. If you prefer this method, feel free to dust off your old RSS reader and subscribe to my RSS feed!

Have a great Christmas with your loved ones!