
Lernreise 6/7: What AI Actually Can (and Cannot) Do

I want to write this post carefully, because the nuance matters and most things written about AI productivity are not careful.

The AI tools I used this week were remarkable and frustrating in roughly equal measure, at different times, for different reasons. Both things are true. Neither cancels the other.


Start with the remarkable.

The wiki documentation was worth the experiment on its own. Every piece of infrastructure I provisioned, every workflow component I built, ended up documented in the Gitea wiki in language that a human could read and learn from. Not command logs. Actual explanations: what was built, why this approach was chosen, what to watch out for. This is documentation that would never have existed if I had done the work alone, because I am the kind of person who documents things enthusiastically on day one and then never again.

The harvester workflow, from idea to running code, took less than a day: ChromaDB set up, OpenTofu configs written, Ansible playbook in place, container provisioned. That would have taken me several days of reading documentation, making mistakes, reading more documentation, and making different mistakes. The AI compressed that substantially.

The consistency across a complex context was also impressive. Give it a network inventory, a naming convention, a set of constraints, and it applies them reliably. I am the one most likely to forget what I decided three days ago. The AI is not.

The Mistral OCR on bad receipts. The batch embedding pipeline. These worked almost immediately, with minimal adjustment. That is a genuine time saving.
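To give a flavour of what "batch embedding pipeline" means in practice, here is a minimal sketch. The `embed_batch` callable is a hypothetical stand-in for a real API call (embedding endpoints typically cap the number of inputs per request, so texts have to be sent in batches); nothing here is Mistral's actual client API.

```python
from typing import Callable, Iterable, List, Sequence

def batched(items: Sequence[str], size: int) -> Iterable[Sequence[str]]:
    """Yield consecutive slices of at most `size` items."""
    for start in range(0, len(items), size):
        yield items[start:start + size]

def embed_all(texts: Sequence[str],
              embed_batch: Callable[[Sequence[str]], List[List[float]]],
              batch_size: int = 32) -> List[List[float]]:
    """Run every text through the embedding endpoint, batch by batch.

    `embed_batch` is a placeholder for the real API call, e.g. a thin
    wrapper around an embeddings endpoint. It is an assumption here,
    not a documented client signature.
    """
    vectors: List[List[float]] = []
    for batch in batched(texts, batch_size):
        vectors.extend(embed_batch(batch))
    return vectors

# Stand-in "embedder" for demonstration: one fake 2-d vector per text.
fake_embed = lambda batch: [[float(len(t)), 0.0] for t in batch]
print(len(embed_all(["a", "bb", "ccc"], fake_embed, batch_size=2)))  # 3
```

The structure is trivial, which is the point: the AI got this kind of plumbing right almost immediately.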


Now the frustrating parts. They are important.

There is no fire-and-forget. This was the biggest lesson, and I say it having spent a week trying to do exactly that. The AI cannot be pointed at a problem and left to solve it. It needs architectural guidance, domain context, and constant review. When I let the main workflow grow to fifty-two nodes without intervention, I created a problem that was harder to fix than if I had caught it at twenty.

Architecture knowledge is still yours. The AI will build what you describe. It will not tell you that what you are describing is unmaintainable until it is already unmaintainable. Divide and conquer, decompose complex systems, test incrementally: these are software engineering fundamentals that I had to state explicitly. The AI did not volunteer them.

Domain knowledge is not transferable. The Paperless-NGX API, the Jinja template storage paths, the ID-based metadata system: the AI did not know these things, and it did not know that it did not know them. It guessed, with confidence, and it was wrong. The only fix is reading the documentation yourself and then telling the AI what it needs to know. There is no shortcut there.
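The ID problem is concrete enough to sketch. Paperless-NGX references tags by integer ID, not by name, so names have to be resolved first. The payload shape below is a simplified stand-in for a response from the tags endpoint, not the exact schema; treat it as illustration of the lookup, not as the API contract.

```python
from typing import Dict, List

def tag_ids_by_name(api_results: List[Dict]) -> Dict[str, int]:
    """Build a name -> integer-ID map from a Paperless-NGX tag listing.

    `api_results` stands in for the results array of a tags-endpoint
    response; the real payload has more fields than shown here.
    """
    return {tag["name"]: tag["id"] for tag in api_results}

def resolve_tags(names: List[str], lookup: Dict[str, int]) -> List[int]:
    """Map human-readable tag names to the IDs the API expects.

    Fails loudly on unknown names instead of letting anyone, human
    or AI, guess an integer that happens not to exist.
    """
    missing = [n for n in names if n not in lookup]
    if missing:
        raise KeyError(f"unknown tags: {missing}")
    return [lookup[n] for n in names]

# Hand-written stand-in for an API response:
sample = [{"id": 3, "name": "invoice"}, {"id": 7, "name": "receipt"}]
lookup = tag_ids_by_name(sample)
print(resolve_tags(["receipt", "invoice"], lookup))  # [7, 3]
```

The fail-loudly check is the part the AI never volunteered: a confident wrong guess looks identical to a right one until the API rejects it.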

Context windows are finite. Past a certain point of complexity, the model starts contradicting itself. It re-introduces errors it just fixed. It loses track of decisions made three messages back. This is a real limitation, not a hypothetical one. The solution is the same as it is for humans: write things down, decompose the problem, do not try to hold everything in working memory simultaneously.


The honest summary:

AI is not a junior developer. A junior developer — however inexperienced — brings implicit professional instincts: divide the problem, test incrementally, flag what you do not know. You do not need to spell these out. They come with the person. With AI, none of this is implicit. If you do not state it explicitly, in atomic prompts, it does not happen. The AI will build the fifty-two-node monolith without complaint, because nobody told it not to. It will skip the storage path, because nobody asked. It will guess at integer IDs, because it does not know that it does not know them. The gap between “looks finished” and “is correct” is where most of the week went.

AI makes specialists more productive by removing the friction of implementation. A specialist with AI is faster, more consistent, and better documented than the same specialist without it.

AI makes generalists capable of things they could not attempt alone. This week I used OpenTofu, Ansible, ChromaDB, RAG pipelines, Mistral’s embedding API, and n8n’s webhook system. I could not have moved as fast without AI assistance. But the architecture decisions, the domain understanding, the debugging judgement: those were mine. The AI was the implementation layer. I was still the designer.

The real ROI is not in the AI itself. It is in the automation that the AI helps you build faster. The harvester that runs on every document upload and keeps the vector database current. The workflow that will process new documents without me touching them. Those automations exist. Building them took a week instead of months.
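The core of that harvester is simple: split the OCR text into chunks, embed them, upsert into ChromaDB. A minimal sketch of the chunking step, with arbitrary example sizes (the real workflow lives in n8n nodes, not in a Python script):

```python
from typing import List

def chunk_text(text: str, size: int = 500, overlap: int = 50) -> List[str]:
    """Split OCR text into overlapping windows for embedding.

    Overlap keeps sentences that straddle a boundary retrievable from
    both neighbouring chunks. The sizes are illustrative defaults,
    not values from the real workflow.
    """
    if overlap >= size:
        raise ValueError("overlap must be smaller than chunk size")
    chunks, start = [], 0
    while start < len(text):
        chunks.append(text[start:start + size])
        start += size - overlap
    return chunks

doc = "x" * 1200
parts = chunk_text(doc, size=500, overlap=50)
print(len(parts), [len(p) for p in parts])  # 3 [500, 500, 300]
```

Once a document is chunked, each chunk gets an embedding and lands in the vector database keyed to the source document, so retrieval can always point back at the original.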

That is where the value is.


← Lernreise 5/7: Day 3: Fifty Nodes and a Burning Budget  ·  Lernreise 7/7: n8n, a Dead ThinkPad, and What’s Next →

Lernreise 6 of 7. Follow the lernreise tag for the full series.