What Happened at the First Agentic Cafe
We held the first Agentic Cafe meeting online this week — a small group of technologists, consultants, and business owners who are all trying to figure out the same thing: how do you actually work with AI agents day to day?
Who was in the room
We had software engineers, fractional CTOs, a marketing consultant for the construction industry, a technology consultancy owner with nearly 30 years in emerging tech, an ERP and systems integration firm, an aerospace AI startup founder, and a few people running development shops. Experience ranged from people who’ve been writing code for 40 years to someone who picked up their first AI coding tool two months ago and is now hooked.
One person had literally started a new principal engineer role three days earlier — inheriting a massive healthcare codebase with zero tests — and came looking for practical guidance. Another had just landed a job after a long sabbatical, partly wondering whether the job he used to do even exists anymore. (It does, but it looks different now.)
What tied everyone together was this shared experience: sometime late last year, something shifted. AI coding tools stopped feeling like fancy autocomplete and started feeling like you could just tell them what to build. Several of us have canceled our IDE subscriptions. A few admitted they rarely touch code directly anymore.
What we actually talked about
The nuts and bolts of agentic coding. We got into the weeds on Claude Code workflows — plan mode being way too verbose, how to keep your rules and skills files concise enough that the model actually follows them, the emerging hierarchy of CLAUDE.md files versus skills versus commands versus plugins. One member manages entire projects through markdown files in Obsidian rather than any traditional PM tool. Another is all-in on Gemini and made a compelling case for Google’s tooling ecosystem, which sparked a friendly debate that’ll probably continue for months.
“We have no tests at all.” This turned into one of the most useful threads. The group walked through practical approaches — Playwright for browser automation, having an agent audit code against written standards before generating tests, using Google’s Antigravity IDE for quick scaffolding. The advice that landed: don’t start by writing tests. Start by writing specs for what the thing is supposed to do, then let the agent review the code and generate tests from there.
Personal tools vs. enterprise platforms. There’s a real split happening between tools like Claude Code and Goose that run on your machine with access to your files, and platforms like CrewAI that are designed for governed, cloud-based agent orchestration. One member is actively advising clients on this and noted the tension: personal tools are seeing explosive adoption because they’re easy, but enterprises need observability and guardrails that those tools don’t provide yet. We didn’t solve this, but the conversation surfaced the right questions.
Not getting locked in. This came up repeatedly. One member is building all their agentic workflows through Goose specifically so they can swap between Anthropic, OpenAI, Gemini, or local models. Another is centralizing all intelligence in their agent layer rather than inside ClickUp, so a future switch to Jira wouldn’t mean rebuilding from scratch. The consensus: keep the brains outside the tools.
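The “brains outside the tools” idea maps naturally to an adapter boundary: the agent layer only ever sees a minimal interface, and each PM tool gets a thin adapter behind it. A hedged sketch — the interface and adapter names are hypothetical, not anyone’s actual architecture, and the adapters fake out the real API calls:

```python
from typing import Protocol

class TicketSink(Protocol):
    """The only surface the agent layer is allowed to see."""
    def create_ticket(self, title: str, body: str) -> str: ...

class ClickUpAdapter:
    def create_ticket(self, title: str, body: str) -> str:
        # A real version would call the ClickUp API here.
        return f"clickup:{title}"

class JiraAdapter:
    def create_ticket(self, title: str, body: str) -> str:
        # Swapping PM tools means swapping only this class.
        return f"jira:{title}"

def agent_file_action_item(sink: TicketSink, summary: str) -> str:
    """Agent-layer logic: knows nothing about which tool sits behind the sink."""
    return sink.create_ticket(title=summary, body="Filed by the agent layer.")

# The agent code is identical either way; only the wiring changes.
print(agent_file_action_item(ClickUpAdapter(), "Update onboarding doc"))
```

A future ClickUp-to-Jira migration then touches one adapter class, not every workflow that files tickets.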
Automating the meeting-to-action-item loop. A consultancy owner laid out a vision that resonated with everyone: take the weekly meeting cadence most teams already follow, and automate the secretarial layer — turning action items into tickets, matching tasks to team members based on their profiles, running sentiment analysis on client conversations, and surfacing proactive coaching flags based on institutional knowledge of what tends to go wrong. Not replacing the project manager, but giving them a much better-informed starting point.
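The first slice of that vision — pulling action items out of a transcript and proposing an owner — is small enough to sketch. The `ACTION:` convention and the team profiles below are illustrative only; a real system would use an LLM for extraction and richer profiles for matching:

```python
# Sketch: extract action items from a meeting transcript and propose an
# owner by crude keyword overlap with each team member's profile.

def extract_actions(transcript: str) -> list[str]:
    """Pull out lines flagged with a (hypothetical) ACTION: prefix."""
    return [line.removeprefix("ACTION:").strip()
            for line in transcript.splitlines()
            if line.startswith("ACTION:")]

def match_owner(action: str, profiles: dict[str, set[str]]) -> str:
    """Propose the team member whose profile shares the most words."""
    words = set(action.lower().split())
    return max(profiles, key=lambda name: len(words & profiles[name]))

profiles = {
    "dana": {"billing", "invoices", "finance"},
    "lee": {"deploy", "infra", "terraform"},
}
transcript = "Kickoff notes...\nACTION: fix the billing invoices export\n"

for action in extract_actions(transcript):
    print(match_owner(action, profiles), "->", action)
```

This matches the framing from the discussion: the output is a proposal for the project manager to review, not an autonomous assignment.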
The documentation wall. One of the more sobering stories: a mid-market company with thousands of retail locations wants desperately to deploy AI agents but has virtually no documentation of how any knowledge work gets done internally. You can’t hand tasks to an agent if nobody has ever written down how to do them. This led to a broader conversation about the unglamorous prerequisite work — documenting processes, writing SOPs, codifying what people actually do versus what their job descriptions say — that has to happen before agents can take over tasks.
Treating agents like interns, not employees. Someone framed the ideal agent as an advisor sitting on your shoulder — listening, offering suggestions, but not acting on its own until you say so. Another member pushed back gently: the real question is what permissions you give the intern and what guardrails you put in place, because the intern will absolutely ignore your safety instructions if you give it an overwhelming enough task. Old-fashioned access controls — service accounts, role-based permissions — might matter more than prompt engineering when it comes to keeping things safe.
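The access-controls point can be sketched as an allowlist that sits between the agent and its tools, enforced in code rather than in the prompt. The role names and tool names here are made up for illustration:

```python
# Sketch: a permission gate the agent cannot talk its way past.
# Prompts can be ignored under pressure; this check cannot.

ROLE_PERMISSIONS = {
    "intern": {"read_file", "run_tests"},
    "maintainer": {"read_file", "run_tests", "write_file", "deploy"},
}

def gated_call(role: str, tool: str) -> str:
    """Refuse any tool call outside the role's allowlist."""
    if tool not in ROLE_PERMISSIONS.get(role, set()):
        raise PermissionError(f"{role} may not call {tool}")
    # A real dispatcher would invoke the tool here.
    return f"{tool}: ok"

print(gated_call("intern", "run_tests"))
try:
    gated_call("intern", "deploy")
except PermissionError as e:
    print("blocked:", e)
```

In practice the same idea shows up as scoped service accounts and role-based permissions on the systems the agent touches, so even a fully jailbroken agent can only do what its credentials allow.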
Skills as the unit of work. One consultancy is converting all their internal workflows into Claude skills — not to replace people, but to make jobs easier and more consistent. The group debated whether agents should replace roles or tasks. The landing point: tasks are the right unit for now. Roles are messy bundles of disparate responsibilities that happened to get packaged together by company history, and trying to replicate a whole role with an agent is a recipe for failure.
What’s coming next
Our next meeting will build on the foundation laid this week. Planned discussion points include:
- Metrics that matter – How to measure the impact of AI-generated tickets (e.g., reduction in email volume, cycle-time improvements) and surface early signals of bottlenecks.
- Governance & soft-skill transfer – Designing prompts and response patterns that embed empathy, escalation handling, and constructive coaching without over-automating.
- Tool-migration playbook – A step-by-step approach for moving a team from ClickUp to Jira (or another PM system) while keeping the external agent layer intact.
- Expanding the knowledge base – Best practices for curating markdown files, updating embeddings, and maintaining “parent-child” relationships between high-level concepts and detailed procedures.
- Broader AI-agent ecosystem – Keeping an eye on emerging LLM providers (Gemini, Claude, etc.) and ensuring our architecture remains plug-and-play.
We’ll also reserve time for participants to share their own experiments — whether that’s a custom prompt library, a new integration, or a change-management challenge they’re facing.
How to join
- When: We’ll send out a poll soon to set a date for the week of Feb 23rd.
- Where: Video call (Google Meet – link will be emailed after registration)
- Cost: None
- What you get: A live discussion, the chance to ask questions, and a community that is testing AI-agent ideas in real projects.
If you’re building in this space and want to compare notes, come say hello.
We’re keeping the group small enough for everyone to speak and focused on agentic topics, but it’s open to anyone who wants to explore these ideas without a sales pitch. Feel free to forward this note to colleagues who might be interested.
Looking forward to seeing you at the next session.