Since mid-2023 I've been using AI. First out of curiosity, then daily, then hourly. Over 2000 hours. 50,000 pages in the last year alone. Some weeks 60 hours. This isn't a hobby — it's how I live.
Not for "write me an email." For everything. Furniture construction. Code architecture. Decisions. Pattern recognition. Emotional processing. Content. This website. This text.
The results I get look completely different from what most people know of AI. Not because the AI is better. But because the problem was never the AI.
This is the biggest mistake I see. Someone enters a prompt, gets an answer, and takes it. Done. "AI is just kind of mediocre."
No. You stopped before it got good.
When I get an answer, I check it. Not whether it's "good" — whether it's RIGHT. Whether it hits what I mean. Usually not the first time. So I say specifically what's off and where it should go. Not "do it again" — but: "This is too distant. I want it more personal, because the reader should feel like they were there."
Sometimes just: "That's not real." Because I feel it in the first line when an answer performs instead of reflecting.
And then again. And again. Until it lands.
Imagine you want to buy a laptop. You go to a website and search "laptop." 50,000 results. Useless.
Now you set filters. 14 inches. Minimum 16GB RAM. 512GB SSD. Under 1500 CHF. Rating above 4 stars. Suddenly 12 results. And among them is exactly what you're looking for.
Prompts work the same way. The more filters you set, the more precise the result.
"Make me a website" — that's the search without filters.
"Make me a website that's noble, modern, and elegant. Subtle animations. It should live, breathe, give impulses. No framework — pure HTML, CSS, and JavaScript. Dark mode as default. Sound design on hover and click. The cursor should have a trail. And an Easter egg with the Konami code."
Those are 12 filters. The result is a completely different website.
This is the difference between someone who uses AI and someone who builds with AI.
Most people hand in a specification. "Build me a folder structure with five subfolders." The result is technically correct and completely lifeless.
What I do: I say what I want and WHY. Not the technical details — the principles. "I want every agent to know me immediately. Understand my operating system. And the system to maintain itself." That's a completely different brief. The AI fills in the WHAT. I provide the WHY and the TASTE.
Exactly how I build furniture. A friend didn't say: "Build me a piece of furniture with five levels and Blum Movento full-extension runners." He said: "I listen to vinyl and don't know where to put the sleeve." The rest I picked up by listening, observing, and thinking about it for six months.
AI works the same way. Don't give it the solution. Give it the problem and the taste. The result will be better than anything you imagined — because the AI sees possibilities you don't, but still builds according to your principles.
AI is a mirror. What you throw in comes back amplified. Put in shallow stuff, get shallow stuff back.
Most people open a chat and start from zero. Every time. The AI knows nothing — who you are, how you think, what you like, what you hate. So it answers generically. For everyone. For no one.
What I do: I built a system. Not a note collection — a living operating system that ensures every AI instance I open already knows me. My patterns, my projects, my rules, my decision logic. Within seconds a new session has completed an onboarding that would take a human weeks.
Two layers. One for the raw material — journaling, reflection, WHOOP data, daily logs. Everything close to the moment. The second distills the raw into something a machine can process immediately. Compressed, connected, actionable. Not "yesterday someone spent 30 minutes on the website and gave the following feedback" but strategic context, in one sentence.
The system maintains itself. Clear rules for what goes in, what gets removed, when to split, when to archive. No agent can freestyle. Everyone reads the same playbook.
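In code terms, the idea is simple. A minimal sketch — the folder names, file layout, and opening line are illustrative, not my actual system:

```python
from pathlib import Path

# Illustrative layout (not the real structure):
#   context/raw/        journaling, daily logs, unprocessed notes
#   context/distilled/  compressed, machine-ready summaries
CONTEXT_DIR = Path("context/distilled")


def build_system_prompt(context_dir: Path = CONTEXT_DIR) -> str:
    """Concatenate every distilled context file into one onboarding block.

    Only the distilled layer is loaded. The raw layer stays close to
    the moment; the machine reads the compressed version.
    """
    parts = []
    for file in sorted(context_dir.glob("*.md")):
        # One section per file: patterns, projects, rules, voice, ...
        parts.append(f"## {file.stem}\n{file.read_text().strip()}")
    return "You already know me. Context follows.\n\n" + "\n\n".join(parts)
```

Every new session starts by reading this block. That's the onboarding: seconds instead of weeks, and every agent reads the same playbook — no freestyling.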
And it grows. Every session makes it denser. Every denser system makes the next session better. Better sessions produce better insights. Better insights make the system denser. Flywheel.
I can't write well. Not in the traditional sense — sit down, form sentences, produce linear text. My head works too fast, too parallel, too many layers at once. Pressing that into text doesn't work.
But I can share. Ideas, thoughts, connections — they're there. Just not in written sentences.
So I use AI as a translator. I share what I mean — in my words, in my order, with my jumps. AI shapes that into a structure others can understand. And then I check: is that right? Is that me? No — specific feedback. Yes — it stays.
My best friend read the texts on my website and said: the Florian described there — I know him. Exactly like that.
That's not coincidence. That's the result of hundreds of corrections over months. The AI didn't learn how I sound — I learned to express myself through AI.
There are two ways to use AI.
The first: you give an assignment, the AI executes, you get a result. Like an employee. "Write me a blog post." "Fix this bug." Transactional. The result is generic — it could be for anyone.
The second: the AI knows your operating system. It knows you decide sensorially. That you build systems instead of using willpower. That your voice has short sentences, rhythm over grammar. That you identify bottlenecks and eliminate them.
When you tell a generic agent "write a blog post" you get a blog post. When you tell YOUR agent "write a blog post" it knows your voice, your rules, your anti-patterns. The result sounds like you. Not like AI.
Same model. Same hardware. But a completely different operating system running on it.
This is the most important point. And the one that has the least to do with AI.
No tool in the world can help you if you don't know what you want, how you sound, what feels right. You can write the best prompts, give the most context, use the most expensive AI — if you don't know who you are, you get generic answers. Because you ask generic questions.
I know how my head works. Four operating systems, one interface. I know how I decide — four filters. I know what feels right and what doesn't, before I can explain why. And all of that is in every prompt I write. Not explicitly. Implicitly. In HOW I phrase the assignment, WHO I am is already embedded.
The AI doesn't just read what you say. It reads how you say it. And from that it builds something that fits you — or doesn't.
Self-knowledge is the operating system. AI is the app that runs on it.
The tool performs as well as the person using it.
And I don't accept a mediocre tool.