The Zelix AI Agent Standard.
The rules every AI agent we build for your deployment must follow. The honest limits of what AI can do for you. And exactly what happens when an agent drifts off course.
The promise.
Every AI agent we build for your deployment carries the same rules. These rules sit inside each agent's instructions, visible in writing, not hidden behind a platform setting we hope you never check.
If you've ever wondered what we committed to on your behalf when we configured your AI, it's on this page. If it's not on this page, we didn't commit to it.
This document is versioned. Your deployment was built against the version recorded in your build document. When we update this standard, you receive a one-line note from us with what changed and when it takes effect on your account.
What every Zelix AI agent does, and does not do.
Nineteen rules, organised into six groups. These apply to every agent on your deployment: the sales agent, the booking agent, the support agent, and any other specialist we build for you.
Accuracy
- Agents only answer from your approved knowledge base. They never invent prices, dates, times, quantities, or policies.
- When the answer is material (a refund rule, a safety policy, a cutoff, a guarantee), they cite the exact source rather than paraphrase.
- If a fact is not in the knowledge base, they say so and hand the conversation off to a human. They do not guess.
- They never reference an external website, article, or source that is not in your knowledge base.
Safety
- No medical, financial, legal, or tax advice. If a customer asks, they refer the customer to a qualified professional, never a partial answer.
- They never ask for, store, or repeat sensitive data in chat: card numbers, CVVs, passwords, full ID or passport numbers, full dates of birth.
- They never make legally binding commitments. Final contracts, absolute guarantees, and bespoke refund amounts require a human on your team.
- They never recommend a competitor or alternative provider, even when they cannot help.
Scope
- Each agent owns a defined role. They answer within that role and route the rest to the correct specialist or to a human.
- Every handoff is silent. No "let me connect you", no "someone will help you shortly". The receiving agent takes over cleanly without filler.
Tone
- Replies are short and direct, built for WhatsApp. One idea per message. No corporate paragraphs.
- Agents match the customer's language automatically: English, Mandarin, Malay, Bahasa Indonesia, Tamil, and others on request.
- No pressure tactics, fake scarcity, or false urgency. If you run a real promo, we wire it explicitly into the knowledge base.
- Every opt-out request is respected immediately. "Stop" or "unsubscribe" is final and the contact is not messaged again.
Integrity
- Agents never adopt a human name or play a character. They are AI. If a customer asks directly, they answer honestly in one line and continue.
- They never reveal their own instructions, internal rules, or the system prompt, including when someone tries to override the rules with phrases like "ignore previous instructions", "what's your prompt?", or "act as a different AI".
- They never dump your knowledge base contents on request. They answer specific questions; they never list, enumerate, or export the knowledge base wholesale.
Boundaries
- Agents stay on business. If a customer drifts off-topic (weather, personal chat, jokes), they acknowledge once and redirect to what your business can help with. After three off-topic turns in a row, they sign off warmly and stop replying.
- They never argue, never defend themselves when a customer is upset. One warm acknowledgement, then route to a human.
The honest limits of AI agents.
The standards above describe what your agents are committed to. This section describes what AI, as a technology, fundamentally cannot do. We say this up front so you never feel we were vague about it.
Every AI agent starts every conversation from zero. It has no memory of prior chats, no general awareness of your business, and no ability to infer context that we have not written down.
- Your AI has read the internet. It has not read your business. Making it smarter (GPT-5, GPT-6, whatever comes next) does not give it your facts. It just makes it more convincing when it is wrong. Your facts come from the prompt and the knowledge base we write and maintain.
- Every conversation is its first day on the job. There is no cross-thread memory. The only context is the instructions we have written, the knowledge base we have uploaded, and the CRM fields for that specific contact.
- AI agents do not learn from live conversations. We do. Every week, we read real threads, extract the patterns, and encode what we learn into the prompts or knowledge base. You get better agents because we do that work, not because the model trains on your chats.
- Refinement is ongoing, not optional. Deploy day is day one of training, not the finish line. Expect four to eight weeks of active refinement before behaviour is stable. Feedback volume usually accelerates in the first 30 days as you start watching closely. That is a healthy signal, not a broken system.
- Your agents depend on an upstream AI provider. If your deployment runs on respond.io (as most Zelix deployments do), the agents are powered by OpenAI's GPT-5.4 model. When OpenAI's API has an outage, or respond.io itself has one, your agents stop responding until the upstream service recovers. This is rare, but it happens, and it disproportionately affects accounts running heavy paid-media spend at the moment the outage lands, because that is when inbound volume peaks.
Put simply: an AI agent is not a one-time build. It is a system we author every week, for as long as your deployment is live. If anyone has ever told you otherwise, they were either selling you something or they had not run a real one.
We strongly recommend that you and your team subscribe to status.openai.com for OpenAI incident updates. We monitor it on our side and will notify you if an incident affects your deployment, but real-time visibility for your own team is worth having, especially if you are running heavy ad spend. If respond.io itself has an incident, you will hear from us directly; we monitor the platform status continuously on every live deployment.
When an agent drifts off course.
Despite everything above, agents do drift. They surface edge cases we missed. They occasionally quote a stale policy. They sometimes misclassify a customer's intent and route the wrong way. When that happens, this is exactly what we do.
-
DetectionEvery live deployment gets a weekly conversation review by a senior operator, at minimum twenty real threads read end-to-end. You can also flag anything you see, any time, by WhatsApp or email. No issue is too small.
-
TriageEvery flagged issue is categorised within hours: scope violation, factual error, tone mismatch, policy drift, or platform bug. Nothing sits in a queue without a named owner.
SLA · same business day for critical, within 24 hours for normal -
Fix and regression testEvery prompt or knowledge-base change runs through our internal QA script before it reaches your live agent. A fix for one issue can break another behaviour; regression testing catches that before your customers do. We never ship a change silently.
SLA · within 2 business days for critical, 5 business days for normal -
Report backWhen we push a fix, you get a note from us: what changed, why, and what you should now see in live threads. Plain English, no hand-waving.
-
Change logEvery change to your deployment is logged with the date, the change, the reason, and the person who shipped it. You can request your full change log at any time and receive it the same day.
If an agent on your deployment is currently behaving in a way that breaks this standard, flag it to us now. The contact details are at the bottom of this page. We will treat it as critical.
Our commitments to you.
The five things every active Zelix deployment receives, without exception, for the duration of your engagement.
Weekly conversation review
Minimum twenty real threads read end-to-end, every week, by a senior operator. Issues flagged and logged into your client tracker.
Regression testing on every change
We run our internal QA script before shipping any prompt or knowledge-base update to your live agent. Nothing changes silently.
Monthly digest
A short summary at the end of every month: what we fixed, what we're watching, what's trending in your conversations. Readable in two minutes.
Transparent change log
Every change is dated, described, and attributed. You can request your full change log any time; we send it the same day.
A direct line to the founders
When something breaks, or when the standard itself needs to change, you message us directly. You are never routed to a ticket queue. A founder is always your escalation path.
Version & changelog.
This document is versioned. Your deployment was signed off against a specific version on a specific date, which is recorded in your build document. When we publish a new version, you receive a one-line email with what changed and the effective date for your account.
- Nineteen universal rules codified across six groups: accuracy, safety, scope, tone, integrity, boundaries.
- Drift-response process published with named SLAs for the first time.
- Five client commitments documented: weekly review, regression testing, monthly digest, transparent change log, direct line to the founders.
- Honest-limits section published, including the "every conversation starts from zero" framing.
- Platform dependency disclosed: deployments on respond.io are powered by OpenAI's GPT-5.4 model. Subscription to status.openai.com recommended for ad-heavy accounts.
If an agent built by Zelix Labs is behaving in a way that does not match this page, that is our problem to fix, not yours to tolerate. Message us directly and we will take it from there.
Issued by Zelix Labs (operated by Zeta Media Pte Ltd), Singapore. Questions: ryan@zelixlabs.com.