AI Content Policy

Last updated: March 2026

ANDI's AI content policy is built on a core architectural principle: the model that reasons about your business is not the model that generates language. ANDI uses a proprietary Business Concept Model (BCM) for structured reasoning and uses language models only for translating those conclusions into natural language. This separation is what prevents hallucination in ANDI's outputs.

This AI Content Policy describes how Zamora AI SRL ("Zamora," "we," "our," or "us") designs, operates, and governs the AI capabilities within ANDI, our business operating system for revenue-generating companies. This is not a generic AI disclaimer. It is a precise account of how ANDI's AI architecture works, what we have committed to, and what we will never do.

1. Our Philosophy: AI That Reasons, Not Guesses

Most AI tools in the enterprise market work the same way: they pass your data into a large language model and return whatever the model generates. This is fast to build and impressive to demo. It is also unreliable in production, because language models are designed to produce fluent text, not defensible business reasoning. Fluency and accuracy are not the same thing.

ANDI is built on a different premise. We believe AI in a business operating context must meet three standards that raw LLM prompting cannot reliably satisfy: it must reason from structured evidence, it must be fully traceable, and it must know the boundaries of what it knows. Our architecture enforces all three.

The result is an AI system that can be held accountable. Every conclusion ANDI surfaces is grounded in a defined reasoning framework, linked to the specific signals that informed it, and scored for confidence. When ANDI is uncertain, it says so. When a signal is missing or ambiguous, it surfaces that fact rather than filling the gap with plausible-sounding language.

2. How ANDI Uses AI

ANDI's AI operates through a two-layer architecture. Understanding this separation is essential to understanding why ANDI's outputs are trustworthy.

The Business Concept Model (BCM) is ANDI's reasoning layer. The BCM is a structured framework that maps incoming business signals to named concepts: customer health, pipeline momentum, churn risk, revenue retention, expansion readiness, and others. It does not generate language. It evaluates evidence, resolves conflicts between signals, and produces structured conclusions about the state of the business. The BCM is deterministic where determinism is warranted and probabilistic where uncertainty exists. Its logic is auditable and versioned.

The language layer sits above the BCM. Once the BCM has reached a conclusion, a language model translates that conclusion into readable prose. The language model does not reason. It does not have access to raw customer data. It receives the BCM's structured output and renders it in natural language. This separation is what prevents hallucination: the model that speaks is not the model that reasons. A language model cannot fabricate conclusions the BCM has not already validated, because it never sees the inputs; it sees only the BCM's structured output.
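
To make this separation concrete, the sketch below illustrates the shape of the hand-off between the two layers. It is illustrative TypeScript only; the type names, fields, and values are assumptions made for this example, not ANDI's actual interfaces.

    // Illustrative sketch of the BCM-to-language-layer hand-off.
    // Names, fields, and values are hypothetical, not ANDI's real API.

    // The BCM's output: a structured, validated conclusion. No raw data.
    interface BcmConclusion {
      concept: string;          // e.g. "churn_risk", "pipeline_momentum"
      assessment: "low" | "moderate" | "high";
      confidence: number;       // 0..1, calibrated by the BCM
      citedSignalIds: string[]; // references to the signals that informed it
    }

    // The language layer receives only the structured conclusion. It can
    // rephrase what the BCM concluded; it cannot invent a new conclusion,
    // because it never sees the raw inputs.
    function renderConclusion(c: BcmConclusion): string {
      return `Assessed ${c.concept.replace(/_/g, " ")} as ${c.assessment} ` +
        `(confidence ${(c.confidence * 100).toFixed(0)}%, ` +
        `based on ${c.citedSignalIds.length} cited signals).`;
    }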

This architecture is governed by the SAGE methodology (Strategic AI Guidance and Execution). SAGE defines how ANDI's AI components are designed, tested, deployed, and updated. It establishes the standards for signal quality, reasoning validity, confidence calibration, and human escalation. ANDI's AI behavior is not the product of ad hoc prompt engineering. It is the product of a repeatable methodology that can be audited.

3. Signal Processing, Not Data Collection

ANDI does not copy your customer data. This distinction matters, and we want to be precise about it.

When ANDI connects to a customer's CRM, product analytics platform, support system, or financial data, it ingests signals and metadata. A signal is a structured, interpretable indicator: a customer's product usage trend, a support ticket resolution time, a contract renewal date, a pipeline stage velocity. ANDI processes these signals through the BCM. The underlying records, documents, and raw datasets remain in the customer's environment. They are not transferred to Zamora's infrastructure and they are not stored in ANDI's data layer.
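
For illustration, a single signal might be represented along the lines of the sketch below. The field names are hypothetical; the point is that the record carries an interpretable indicator and metadata, never the underlying CRM record, ticket body, or raw dataset.

    // Hypothetical signal record (illustrative only). It carries an
    // interpretable indicator plus metadata; the raw source records
    // stay in the customer's environment.
    interface Signal {
      id: string;               // stable reference used for citation
      tenantId: string;         // isolation boundary
      source: "crm" | "analytics" | "support" | "finance";
      kind: string;             // e.g. "usage_trend_30d", "renewal_date"
      value: number | string;   // the interpretable indicator itself
      observedAt: string;       // ISO-8601 timestamp; drives recency scoring
    }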

This is a meaningful architectural commitment, not a policy aspiration. "Signal processing, not data collection" is how the BCM is designed to operate: it does not require raw records to function. As a result, the attack surface for customer data exposure is fundamentally smaller than in systems that copy and store customer datasets.

Customers retain full ownership of their data at all times. ANDI's access is scoped, permissioned, and revocable.

4. Data Isolation and Security

ANDI is built on a zero trust security model. No request is implicitly trusted. Every request is authenticated, authorized, and scoped at the time it is made. Permissions do not persist beyond the session or operation for which they were granted.
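
A minimal sketch of what per-request scoping can look like follows. This is an illustration of the principle, not ANDI's implementation; every name in it is hypothetical.

    // Illustrative zero trust check (hypothetical, not ANDI's code).
    // Every request is authenticated, authorized, and scoped when it
    // is made; grants do not outlive the session or operation.
    interface RequestContext {
      principal: string;   // authenticated identity
      tenantId: string;    // tenant the grant was issued for
      scope: string;       // e.g. "signals:read"
      expiresAt: number;   // epoch ms; no standing access
    }

    function authorize(ctx: RequestContext, requiredScope: string, targetTenant: string): void {
      if (Date.now() >= ctx.expiresAt) throw new Error("grant expired");
      if (ctx.scope !== requiredScope) throw new Error("out of scope");
      if (ctx.tenantId !== targetTenant) throw new Error("cross-tenant access denied");
      // The request proceeds only after all three checks pass.
    }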

Tenant isolation is enforced at every layer of the stack. Customer signals, BCM state, reasoning context, and language model inputs are isolated by tenant. There is no shared context across customers. A query from one customer cannot surface signals, conclusions, or artifacts from another customer's environment.

Break-glass protocols govern emergency access. When support or engineering personnel require access to a customer environment to diagnose a critical incident, that access is fully logged, time-limited to the duration of the incident, and granted only with multi-party authorization. No standing access to customer environments exists. Break-glass events are reported to the customer upon closure.
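
A break-glass event record could look something like the sketch below. The fields and values are hypothetical; they exist only to illustrate the properties stated above: full logging, a time limit bounded to the incident, multi-party authorization, and customer notification on closure.

    // Hypothetical break-glass audit record (illustrative values only).
    const breakGlassEvent = {
      incidentId: "INC-4821",
      tenantId: "tenant-example",
      requestedBy: "oncall-engineer",
      approvedBy: ["eng-manager", "security-officer"], // multi-party authorization
      grantedAt: "2026-03-02T14:05:00Z",
      expiresAt: "2026-03-02T16:05:00Z",          // bounded to the incident window
      customerNotifiedAt: "2026-03-02T16:10:00Z", // reported upon closure
    };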

For customers with elevated data residency or sovereignty requirements, ANDI supports a bring-your-own-VPC (BYO VPC) deployment model. In this configuration, ANDI's processing infrastructure runs within the customer's own cloud environment. Signals are processed and the BCM operates entirely within the customer's network boundary. No signal data leaves the customer's VPC.

5. Explainability and Confidence

Every recommendation ANDI surfaces includes two elements that generic AI tools typically do not provide: a confidence score and a citation of the signals that informed the conclusion.

Confidence scores are produced by the BCM based on signal completeness, signal recency, and signal consistency. A high confidence score means the BCM had strong, recent, and consistent evidence. A low confidence score means one or more of those conditions was not met. Confidence scores are not cosmetic. They are the BCM's honest assessment of the evidentiary basis for its conclusion. ANDI surfaces low-confidence findings rather than suppressing them, because a weak signal that is visible is more useful than a strong-sounding claim that conceals its own uncertainty.
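
As a simplified picture of how such a score could be assembled from the three stated inputs, consider the sketch below. The weights are arbitrary assumptions; the BCM's actual calibration is more involved.

    // Simplified illustration of confidence scoring from signal
    // completeness, recency, and consistency. Weights are arbitrary
    // assumptions, not the BCM's actual calibration.
    function confidence(completeness: number, recency: number, consistency: number): number {
      // Each input is assumed normalized to the range 0..1.
      const score = 0.4 * completeness + 0.3 * recency + 0.3 * consistency;
      return Math.min(1, Math.max(0, score));
    }

    // Strong but stale, partly inconsistent evidence yields a visibly
    // low score; it is surfaced, not suppressed.
    confidence(0.9, 0.2, 0.4); // ~0.54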

Cited signals allow users to trace every recommendation back to the specific business data that generated it. If ANDI assesses a customer as high churn risk, the user can see exactly which signals drove that assessment: declining product engagement over the last 30 days, an unresolved P1 support ticket, and a contact who has not responded in three weeks. The reasoning is not a black box. It is a readable chain of evidence.
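
Represented as data, the churn-risk example above might look like this (the structure and values are illustrative only, not ANDI's output format):

    // Illustrative recommendation payload for the churn-risk example.
    // Every conclusion carries its evidence chain and its confidence.
    const recommendation = {
      concept: "churn_risk",
      assessment: "high",
      confidence: 0.82,
      citedSignals: [
        { id: "sig-1041", summary: "product engagement declining over the last 30 days" },
        { id: "sig-2207", summary: "unresolved P1 support ticket" },
        { id: "sig-3318", summary: "key contact unresponsive for three weeks" },
      ],
    };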

This explainability is not an optional reporting layer added on top of an opaque model. It is native to the BCM architecture. The BCM reasons in structured terms that are inherently inspectable. The language layer renders those terms in prose, but the underlying structure is always available to the user.

6. Human Oversight

ANDI is a decision-support system. It is designed to make human judgment faster, better-informed, and less dependent on incomplete information. It is not designed to replace human judgment.

Consequential business decisions, including decisions about customer relationships, revenue strategy, and organizational priorities, require human review before execution. ANDI does not take autonomous actions on behalf of customers in these domains. It surfaces recommendations with the evidence and confidence context needed to evaluate them. The decision remains with the person.

Our internal governance process mirrors this principle. Before any new AI capability is deployed, it is evaluated against accuracy benchmarks, stress-tested for edge cases, and reviewed for potential to produce harmful or misleading outputs. The SAGE methodology defines these evaluation standards. Capabilities that do not meet the bar are not shipped.

Customers can flag any ANDI output as incorrect, misleading, or inappropriate. Flagged outputs are reviewed by our product and AI teams. We treat these reports as a signal about BCM reasoning quality, not just individual anomalies, and they feed into our ongoing model governance process.

7. What We Will Never Do

These are explicit, unconditional commitments. Where the underlying action is architecturally excluded, they are not qualified by "unless required by law" or "without your consent."

  • We will never use customer signals to train, fine-tune, or improve models for any other customer. Customer data is processed to serve that customer. It does not cross tenant boundaries under any circumstances.
  • We will never use customer signals to train general-purpose foundation models or improve our base AI infrastructure without explicit, written consent from the customer.
  • We will never expose one customer's signals, BCM state, or reasoning context to another customer. Tenant isolation is enforced at the architectural level. It is not a configuration setting that could be misconfigured.
  • We will never execute autonomous decisions in consequential domains without a human review step. ANDI recommends. Humans decide.
  • We will never represent AI-generated content as human-authored. All AI outputs are identified as AI outputs within the platform.
  • We will never suppress low-confidence findings to make outputs appear more certain than they are. Confidence scores reflect the actual evidentiary state. They are not adjusted for presentation.

8. Bias Awareness

We are honest about the limitations of ANDI's AI. No system that reasons from historical business signals is fully immune to the biases encoded in those signals. If a company's historical data reflects patterns of under-investing in certain customer segments, ANDI will reason from that history unless those patterns are surfaced and corrected.

The BCM architecture reduces some categories of bias risk that affect raw LLM systems. Because the BCM reasons from explicit, named signals rather than learned statistical associations, its reasoning is more inspectable and its failure modes are more identifiable. But inspectability is not immunity.

We test BCM reasoning outputs across different business contexts, customer profiles, and industry verticals before deployment. We monitor for systematic patterns in flagged outputs that could indicate reasoning bias. We maintain a feedback channel specifically for customers who believe ANDI's outputs reflect unfair or systematically skewed reasoning.

Bias mitigation is a continuous program, not a pre-launch checklist. We do not claim to have solved this problem. We claim to be working on it rigorously and to be transparent when we find it.

9. Updates to This Policy

ANDI's AI architecture evolves as we improve the BCM, expand signal coverage, and develop new capabilities. When those changes have material implications for how customer data is processed or how AI outputs are generated, we will update this policy and notify customers via email and in-platform notification before the changes take effect.

The "Last updated" date at the top of this page reflects the most recent revision. Prior versions of this policy are available upon request. We welcome substantive questions about our AI architecture and governance from customers, researchers, and regulators. You may also review our Privacy Policy for information about how we handle personal data more broadly.

10. Contact

For questions about ANDI's AI architecture, to report an output you believe is incorrect or biased, or to discuss our responsible AI program in the context of your organization's requirements, contact our AI team directly:

  • Email: hello@zamora.ai
  • Zamora AI SRL, Bucharest, Romania

Reports of harmful, inaccurate, or unexpected AI behavior are treated as high-priority issues. We review them within two business days and follow up with the reporting customer on our findings and any corrective action taken.
