Every project like this ends with a set of opinions you did not have before. Here are mine.
n8n
I will be charitable and say: n8n is excellent for linear workflows of three to five nodes. Trigger, action, done. For anything more complex, it becomes something I would describe, with some restraint, as binary toxic waste.
Visual workflow tools have an inherent problem: the visual representation is the code. You cannot refactor it the way you refactor code. You cannot diff it sensibly. You cannot review it in a pull request. When a node is producing wrong output and you need to understand why, you are clicking through a canvas, unfolding nested expressions, reading JavaScript embedded in a UI field that was not designed to hold much JavaScript.
At fifty-plus nodes, my n8n workflow was not merely difficult to manage; it was impossible to hold a mental model of. I had to decompose it into three sub-workflows, each of which was still harder to work with than an equivalent Python script would have been.
The verdict: n8n for simple linear automations only. A webhook comes in, you format it, you send it somewhere. Perfect. Anything with branching logic, error handling, loops, or external API state: write proper code.
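To make "proper code" concrete, here is a minimal sketch of what that kind of branching and error handling looks like in plain Python. The payload shape, the routing rules, and the names (classify_route, handle_webhook) are invented for illustration, not taken from the actual workflow — the point is that this is diffable, reviewable, and unit-testable in a way a canvas is not:

```python
import json

def classify_route(event: dict) -> str:
    """Route a webhook payload to a queue (hypothetical payload shape)."""
    doc_type = event.get("type")
    if doc_type == "invoice":
        return "finance"
    if doc_type in ("letter", "contract"):
        return "archive"
    return "review"  # anything unrecognised goes to manual review

def handle_webhook(raw: str) -> dict:
    """Parse, validate, and route — with every failure mode explicit."""
    try:
        event = json.loads(raw)
    except json.JSONDecodeError:
        return {"status": "error", "reason": "invalid JSON"}
    if "type" not in event:
        return {"status": "error", "reason": "missing 'type' field"}
    return {"status": "ok", "queue": classify_route(event)}
```

Three branches and two failure modes fit in twenty lines, and each one can be exercised directly in a test — the same logic spread across n8n IF nodes and embedded expressions is exactly what becomes unreviewable at scale.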
OpenTofu and Ansible
Both are powerful tools that reward the time invested in learning them properly. OpenTofu’s declarative model is learnable quickly, and the state management is genuinely useful. Ansible is more complex, more verbose, and requires more patience. Neither is a tool for someone who wants to click something and have it work.
What they give you in return is reproducibility. The ChromaDB container I provisioned is documented in code. If I destroy it, I can rebuild it. If I want another one, I run the same code. That is the value. For a homelab operator who has been doing everything manually through web UIs, it is a real step change.
The T410
My ThinkPad T410 has been my development machine for longer than I care to admit. It has survived multiple OS reinstalls, spent several years as a door stop, been revived, been used for experiments that its designers certainly did not anticipate, and generally conducted itself with dignity.
This week it reached the end of its useful life as an active development machine. Not because it failed — the T410 runs Debian and did everything it was asked, reliably and correctly. Just very slowly. The n8n canvas took long enough to load that workflow iteration became genuinely painful. Communicating with the LLM, round after round, with the latency of an old machine and a slow connection, added up. It worked. It was just too slow to be practical.
Separately: I tried to get MCP servers running on it — Model Context Protocol, a standard for giving AI direct access to local systems, files, and APIs. The various documentation pages make this look straightforward. The AI spent several hours trying to get the first server to start. Dependency conflicts, version mismatches, configuration that looked correct and was not, servers that launched and then silently failed. At the end of the session, nothing was reliably working. This was not a hardware problem. MCP installation is simply harder than it looks, and the available documentation does not always match reality.
The T410 will not be thrown away. I am genuinely fond of this machine. It earns a place of honour next to its older sibling — a T41p, the real one, with the IBM logo — which has been with me even longer and has its own mythology. The T410 did nothing wrong. It served faithfully for years. It simply ran out of road.
The Gemini anecdote
Late in the week, having exhausted my patience with n8n, I vented to Gemini and explained the situation. Gemini, after listening sympathetically, suggested that perhaps Python would have been the better tool for this job.
I found this mildly infuriating, given that the same Gemini had, at the start of the week, told me n8n was my friend. But fine. It then started designing a Python microservices architecture for the whole thing. We spent the rest of the session on that. Then Gemini’s token allowance ran out. No code was actually written.
I have saved this anecdote because it is a perfect illustration of the week as a whole: good tools, genuinely useful, with limitations that only become apparent once you are too committed to change course.
What is next
The main workflow runs. New documents are processed. The harvester keeps the vector database current. The broad goal, a system that handles document classification without constant manual intervention, is closer to reality than it was a week ago.
But the n8n architecture is wrong. I know it is wrong. The right implementation is Python microservices: clean code, proper error handling, unit tests, a data model I control. The n8n workflow will be rebuilt. Properly this time.
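As a taste of what "a data model I control" might look like, here is a hedged sketch: an immutable dataclass for one classified document. The names (DocumentRecord, doc_type, tags) are invented for illustration, not the actual rewrite's schema:

```python
from dataclasses import dataclass, replace
from datetime import date

@dataclass(frozen=True)
class DocumentRecord:
    """Hypothetical record for one classified document."""
    doc_id: str
    doc_type: str                # e.g. "invoice", "contract"
    received: date
    tags: tuple[str, ...] = ()   # immutable, so records are hashable and safe to share

    def with_tag(self, tag: str) -> "DocumentRecord":
        """Return a copy with the tag added; frozen means no in-place edits."""
        if tag in self.tags:
            return self
        return replace(self, tags=self.tags + (tag,))
```

The frozen dataclass is a deliberate choice here: every change produces a new record, which makes the pipeline's state transitions trivially testable — exactly the property the n8n version lacked.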
There is also more to write about. The BI dashboard for the financial data sitting in MariaDB. The gold standard feedback loop, where manually verified metadata flows back into ChromaDB and improves future classifications. The infrastructure automation that could be extended to other parts of the homelab.
The week was chaotic, expensive relative to the budget, and produced something that half works rather than something finished. The storagepath problem is still open. The Python rewrite has not started.
And yet: I learned how agentic coding actually works. Not from a blog post. From doing it. I know now what AI can carry, what it drops, when to decompose, when to push through, and when to stop. That understanding is not theoretical any more. By that measure — the actual goal of the week — it was a complete success.
I learned more in seven days than I had in the previous several months of reading about AI tools without using them. That was always the point.
The learning journey continues. If you are following along, the lernreise tag will keep you current. And if you have thoughts, corrections, or better ideas: this blog has a name for a reason.
← Lernreise 6/7: What AI Actually Can (and Cannot) Do
Lernreise 7/7. Follow the lernreise tag for the full series.