
There is no job posting that lists “prompt engineering” under required qualifications. No onboarding checklist covers it. No annual review grades employees on it. Yet across every department – finance, operations, customer service, compliance, marketing, HR – the quality of how employees communicate with AI is already shaping the quality of their work output.
Sabeer Nelli, CEO of Zil Money, argues that this gap between how widely AI is used and how poorly it is guided represents one of the most under-addressed organizational risks in business right now. His position is unambiguous: prompt engineering is not a technical skill belonging to developers or data teams. It is a professional skill, as universal and foundational as written communication, that every employee at every level must deliberately cultivate.
At Zil Money – processing over $100 billion in transactions for US businesses – the pattern Sabeer observes is consistent: organizations that treat prompting as someone else’s responsibility consistently underperform those that treat it as everyone’s baseline competency.
One Tool, Every Department, Wildly Different Results
The same AI model accessed by two different employees produces vastly different outputs – not because the technology differs, but because the quality of instruction does. A customer service representative asking AI to “help respond to a complaint” receives a generic draft. One who specifies the customer’s issue, the company’s resolution policy, the desired tone, and the preferred response length receives something ready to send. A marketing employee asking AI to “write a campaign idea” gets surface-level output. One who frames the request with audience parameters, competitive context, and format requirements gets a genuine strategic starting point.
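The contrast is easy to see side by side. As a minimal sketch (the prompt text below is hypothetical, and no model is actually called), the same request can be under-specified or fully specified:

```python
# The same request, under-specified vs fully specified. The model is the
# constant; the instruction is the variable. (Hypothetical prompt text only.)
vague_prompt = "Help respond to a complaint."

specific_prompt = "\n".join([
    "Help respond to a complaint.",
    "Issue: the customer was double-billed for a monthly subscription.",
    "Policy: refund the duplicate charge within 5 business days.",
    "Tone: apologetic and direct, with no marketing language.",
    "Length: under 120 words.",
])

# The second prompt carries the issue, policy, tone, and length that the
# first leaves for the model to guess.
assert len(specific_prompt) > len(vague_prompt)
```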
The technology is not the variable. The instruction is. Sabeer’s concern is that most organizations are deploying AI broadly while training for it narrowly – if at all. The result is an enterprise-wide skill gap hiding behind a veneer of AI adoption.
The Framework That Works Across Every Function
The five-principle framework Sabeer advocates – role, context, constraints, format, and reasoning – is deliberately department-agnostic. Role assigns AI a defined functional identity before any task begins, whether that is a reconciliation specialist, a process efficiency analyst, or a talent acquisition advisor. Context provides the situational background that separates relevant guidance from generic commentary. A compliance officer who references NACHA guidelines and a specific transaction threshold gets a materially different response than one who simply asks about “payment rules.”
Constraints prevent AI from filling undefined space with assumptions – every department benefits from explicit scope boundaries. Format specifies how output should arrive, whether a checklist, a structured summary, or a numbered action list. Reasoning chains the logic sequentially, making AI’s conclusions auditable rather than opaque. Together, these five principles convert AI from an unpredictable tool into a reliable one. What changes across departments is the content fed into the framework, not the framework itself.
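The five principles can be sketched as a simple prompt assembler. This is an illustrative implementation, not Zil Money's internal tooling; the function name and the reconciliation example are hypothetical:

```python
def build_prompt(role, context, constraints, fmt, task):
    """Assemble a prompt from the five principles:
    role, context, constraints, format, and reasoning."""
    return "\n".join([
        f"Role: You are {role}.",
        f"Context: {context}",
        f"Constraints: {constraints}",
        f"Format: Respond as {fmt}.",
        # The reasoning principle: make the model's logic auditable.
        "Reasoning: Walk through your logic step by step before "
        "stating your conclusion.",
        f"Task: {task}",
    ])

prompt = build_prompt(
    role="a reconciliation specialist",
    context="Month-end close; three ACH batches do not match the bank statement.",
    constraints="Consider only transactions posted this month; cite each figure you use.",
    fmt="a numbered action list",
    task="Propose a sequence of checks to locate the discrepancy.",
)
print(prompt)
```

A compliance officer, a marketer, or a recruiter would pass different arguments into the same function, which is the point: the content changes, the framework does not.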
The Hiring Signal Sabeer Watches Closely
Sabeer’s conviction around prompting fluency has begun reshaping how Zil Money thinks about talent. In a recent interview, he made his position direct: during a candidate assessment task, if someone reaches for Google, that is a meaningful signal. If they reach for an LLM instead – and use it well – that is the person worth hiring.
It is a sharp but logical filter. How someone instinctively chooses to find an answer reveals how they will work. Google retrieves what already exists. A well-prompted LLM helps a person think, structure, analyze, and produce – which is what modern roles demand. This perspective sits at the core of Zil Money’s identity as an AI-first company.
Building prompting fluency organization-wide requires deliberate process design. Standardized prompt templates mapped to recurring tasks remove the burden of constructing quality prompts under time pressure. An iteration culture – where the first AI output is treated as a diagnostic rather than a deliverable – raises the standard of what teams accept. And embedding those templates inside tools employees already use daily ensures adoption happens in context, not in theory.
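A template library of this kind can be sketched in a few lines. The task key, template text, and field names below are hypothetical examples, not a description of any particular company's tooling:

```python
# Hypothetical sketch: standardized prompt templates keyed by recurring
# task, filled in with the specifics of the case at the point of use.
TEMPLATES = {
    "complaint_response": (
        "Role: You are a customer service specialist.\n"
        "Context: {issue}\n"
        "Constraints: Follow this resolution policy: {policy}\n"
        "Format: A reply of at most {max_words} words in a {tone} tone."
    ),
}

def fill_template(task, **fields):
    """Render the standardized template for a recurring task."""
    return TEMPLATES[task].format(**fields)

draft_prompt = fill_template(
    "complaint_response",
    issue="Customer was double-charged for a subscription renewal.",
    policy="Refund the duplicate charge within 5 business days.",
    max_words=150,
    tone="apologetic but confident",
)
```

Because the template fixes the structure, the employee under time pressure supplies only the case-specific facts, and the quality floor of the prompt is set once, centrally.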
Sabeer Nelli’s broader argument is straightforward: the organizations that extract compounding value from AI are not necessarily those with the most sophisticated tools. They are those where every employee, regardless of role, knows how to ask a precise question. That is not a specialist skill. It is the new professional baseline.



