Bobadilla v. The Robot

On Amelia Bedelia, explicit instructions, and what AI is actually good for.


If you've read Amelia Bedelia with a seven-year-old recently, you already understand artificial intelligence better than most executives I've met.

Amelia Bedelia is a housekeeper who follows instructions with absolute precision and zero inference. She is told to "dress the chicken" and puts a small outfit on it. She is told to "draw the drapes" and produces a pencil sketch. She is told to "put out the lights" and hangs them on the clothesline. Every instruction is executed perfectly. Every result is completely wrong. Not because Amelia Bedelia is incompetent. Because nobody told her what they actually meant.

This is AI. Not the dystopian version, not the magical version, just the real one. A tool that does exactly what you tell it to do, nothing more, nothing less, with no ability to infer what you actually wanted from what you actually said. The gap between those two things is where everything goes wrong.

I've been using AI as a primary working tool for long enough now to have a clear picture of what it is, what it isn't, and what it takes to use it well. The answer is less complicated than the discourse around it suggests. It's also more demanding than most people expect.

What AI Actually Is

Here's my honest assessment.

AI is tireless, infinitely patient, available at any hour, and genuinely skilled at replicating a voice and tone when given enough examples and explicit direction. It can hold a large amount of context, synthesize information across sources, and produce a first draft of almost anything faster than any human working alone. It has allowed me to accomplish more in the early months of building this firm than I would have been able to without it. Not because it's smarter than the people I'd otherwise be working with. Because it's efficient in a way that compounds quickly when you know how to use it.

It's also only as good as the briefing it receives.

The best professionals I've worked with over the years weren't the ones who knew the most. They were the ones who asked the best questions before they started. They understood that the work product was only as good as their understanding of the goal, the constraints, the audience, and the specific definition of success for that particular situation. When I gave them complete information, they produced excellent work. When I assumed they had context they didn't have, the work was technically correct and practically useless.

AI works the same way, with one critical difference. A good professional will eventually ask a clarifying question. AI will not. It will produce something that looks complete, sounds authoritative, and may be missing the one piece of context that would have changed everything. Confidently and without apology.

That's not a flaw. It's just Amelia Bedelia. She dressed the chicken. You told her to.

What Explicit Instructions Actually Look Like

Most people use AI the way they'd search Google. Short query, vague intent, hoping for the right answer. That works fine for simple factual questions. It falls apart immediately for anything complex, contextual, or requiring judgment.

What I've learned, through a lot of trial and error and a fair amount of frustration, is that briefing AI well requires the same discipline as briefing any skilled professional. You need to provide context, not just the question. You need to explain what success looks like, not just what you want. You need to share constraints, preferences, prior decisions, and relevant background. You need to be explicit about what you don't want as much as what you do.

When I'm using AI to draft content, I don't say "provide me 3 writing prompts about NDAs." I explain the audience, the voice, the structure, any specific thoughts I've got on the topic, the examples I could include, the things I want avoided, and what the final post needs to accomplish. I share examples of my existing writing. I explain my professional preferences. I tell it what I've already decided so it doesn't have to guess.

When I'm using it for financial modeling, I walk through every assumption explicitly and explain the if-then logic in full. I can't say "model the wind-down scenario" and assume it understands that I can't simultaneously sell a piece of equipment and count it as income-generating. It doesn't know that unless I say it. And if I don't say it, I'll get a model that is mathematically coherent and operationally wrong.

The mistakes I see most often aren't the tool hallucinating facts or going off the rails. They're quieter than that. Wrong use of a term. A forgotten decision from earlier in a conversation. A missed logical constraint that seemed obvious because it was obvious to me and I didn't think to say it out loud. Amelia Bedelia, every time.

Where It Actually Goes Wrong

I want to be specific about this because the discourse tends toward extremes. Either AI is infallible and transformative or it's a liability and a fraud. Neither is accurate.

The real failure modes are mundane. They're the em dash that keeps reappearing after you've asked three times for it to stop. They're the math that produces a correct answer on the first pass and goes haywire on a second pass because the parameters shifted and nobody flagged it. They're the confident summary that uses a term slightly incorrectly in a way that a non-expert would never catch but an expert winces at.

These aren't catastrophic. They're the kind of errors a good editor catches, which means AI works best when the person using it is capable of being that editor. If you can't recognize when the output is wrong, you're not ready to rely on the output. That's not a reason to avoid the tool. It's a reason to develop your own expertise before you delegate to it.

This is also why I think the legal profession's collective anxiety about AI is somewhat misplaced. The question isn't whether AI will replace lawyers. It's whether lawyers who use AI well will have a meaningful advantage over lawyers who don't. The answer to that question is already yes, and the gap is widening.

The Programmatic Angle

Here's the conversation I think founder-led businesses should be having and almost none of them are.

AI is already in your business whether you've made a decision about it or not. Your employees are using it. Your vendors are using it. Your competitors are using it. The question isn't whether to allow it. The question is whether you've thought clearly about how you want it used and communicated that to your team.

Most businesses haven't. They're either ignoring it entirely or leaving it to individual employees to figure out, which means everyone is developing their own approach with no shared standards, no quality controls, and no alignment to the company's values or risk tolerance.

A smart AI policy for a small founder-led business doesn't have to be complicated. It can be as simple as a few clear rules that reflect your specific situation. In an accounting function, maybe the rule is "we do our own math," meaning AI can help with research and drafting but not with calculations that inform financial decisions. In a content function, maybe it's "AI writes first drafts only," meaning nothing goes out without a human edit and a human sign-off. In a client-facing role, maybe it's "we don't use AI to respond to client communications without disclosure."

The specific rules matter less than the fact that you've made intentional decisions and communicated them clearly. That's the difference between a business that uses AI as a tool and a business that's being used by it.

The Advantage of Going Early

The real advantage of early AI adoption is not where most people think it is.

It's not about having access to better tools. The tools are largely available to everyone. It's not about being faster, though that's a real benefit. It's about getting comfortable enough with the tool to develop your own style and your own judgment before the landscape consolidates around someone else's approach.

The people who waited are now learning under pressure, trying to catch up while also doing their actual jobs. The people who overclaimed early are easy to spot. Their content sounds like AI. Their outputs are unedited. Their voice disappeared somewhere in the process. The sweet spot is the person who went in early, made the mistakes, learned what works, and figured out how to use the tool in a way that amplifies their own thinking rather than replacing it.

That's what I've been doing. Using AI to think faster, draft quicker, model scenarios more thoroughly, and explore ideas I wouldn't have had time to explore otherwise. Not outsourcing the thinking. Accelerating it.

The voice in everything I publish is still mine. The ideas are still mine. The judgment about what's right and what's wrong in any given output is still mine. AI is the tool. I'm still the lawyer.

What This Means For Your Business

If you're a founder who hasn't started using AI yet, start. Pick one thing you do repeatedly that involves writing, research, or analysis and try using AI to do the first pass. See what it gets right. See what it gets wrong. Learn to brief it the way you'd brief a very capable, very literal assistant who has never worked in your industry and knows nothing about your specific situation.

If you're already using it, think about whether you've made any intentional decisions about how it fits into your operations or whether you've just been improvising. The improvisation phase is fine for learning. At some point it needs to become a practice.

And if you're a founder with employees, think about what Amelia Bedelia does when there are no instructions at all. She improvises. She does her best. She produces something that is technically a result and may or may not be what you needed.

Give her better instructions. She's very good at following them.

- m


Morgan Bobadilla, Esq. is the founder of Understory Advising PLLC. She spent over a decade as an in-house General Counsel and Director across aerospace, defense, manufacturing, banking, and staffing, building deep expertise in commercial contracts, regulatory compliance, export controls, employment law, and enterprise risk. She now brings that experience directly to founder-led businesses through retainer and flat-fee engagements.

