FAQ

Common buyer questions, answered conservatively.

Use this FAQ to understand what Nivorqa Labs is, how module maturity and ML-ready positioning work, and which public boundaries apply before qualified review.

Public boundary

The short version before the details.

The FAQ preserves the same public posture as the rest of the site.

Claim boundaries

Do not read the site as open access or proof.

  • No public source-code download.
  • No free sandbox.
  • No public pricing.
  • No buyer records on public previews.
  • ML-ready and ML-assisted language requires buyer-side validation.
  • No guaranteed ROI, no legal advice, no accounting/payment/compliance guarantees and no validated predictive-ML accuracy claims.

Buyer FAQ

42 answers for first-pass qualification.

These answers are intended for public buyer review before email, not as commercial terms or implementation commitments.

Positioning

What the public site is, and what it is not.

What is Nivorqa Labs?

Nivorqa Labs is a controlled buyer-review surface for construction workflow source-package offers.

The public site helps qualified software buyers review module fit, maturity route, integration assumptions, synthetic previews, commercial boundaries and next-step paths before deeper disclosure.

Are these SaaS products?

No. The public positioning is not a self-serve SaaS trial, marketplace, checkout funnel or demo portal.

The public route is qualified review of construction workflow source-package offers and proposal-stage module scoping.

Are these source-code modules?

They are public module routes for source-package review and proposal-stage scoping.

Source-package depth is controlled and depends on qualification, commercial fit, agreement and NDA where appropriate.

Access

Boundaries around source, sandbox, pricing and public materials.

Can I download the source code publicly?

No public source-code download is provided.

Deeper source-package review only follows qualification, agreement and NDA where appropriate.

Is there a sandbox?

No free sandbox or public demo portal is provided.

The public site uses synthetic previews and static review pages to help qualified buyers assess fit before deeper disclosure.

Is there pricing on the site?

No public pricing is published.

Commercial structure can be discussed only after module fit, maturity route and intended use are qualified.

Previews

How to interpret screenshots and product surfaces.

Are the screenshots real?

No buyer records are shown in public screenshots.

Public screenshots are static product mockups designed to show workflow shape without buyer records, customer proof or production proof.

Are the previews synthetic?

Yes. Public previews are synthetic product previews.

They are useful for reviewing module scope and workflow objects, but they are not a live buyer environment or production demonstration.

Module Maturity

Which public modules are primary, and which are scoped under proposal.

Which modules are review-ready?

The review-ready source-package offers are Change Order & Claims Intelligence, Project Risk Intelligence and Subcontractor Cost Control & Margin Leakage.

Review-ready means positioned for controlled buyer diligence, not public source access or automatic deployment.

Which modules are proposal-stage?

The proposal-stage modules are BOQ / Cost Intelligence and Tender Comparison & Award.

Both remain available under proposal and should be scoped before any deeper source-package discussion.

Intelligence Layer and ML Readiness

How to read ML-ready and ML-assisted language safely.

What is the Nivorqa Intelligence Layer?

The Nivorqa Intelligence Layer is the public architecture language for deterministic workflow logic, construction data signal extraction, ML-assisted review support, explainability, auditability and governance boundaries.

It helps buyers understand how module outputs could support review and prioritization while keeping buyer-side validation required and human-in-the-loop review explicit.

Is the AI layer trained on our contract data?

Not through the public site, and no buyer contract data should be uploaded here.

Claims Pro AI can be discussed around provider configuration, buyer permission, golden cases and evaluation design, but any use of buyer data requires controlled discussion and explicit data-sharing terms.

What accuracy claims do you make?

No public claims are made about claim-outcome accuracy, probability of success, validated predictive-ML accuracy or guaranteed recovery.

Claims Pro eval harness metrics can be discussed by task as methodology validation: clause-extraction precision/recall and macro F1, narrative-parsing agreement and analogous-search Recall@K. Buyer-side validation is required, and no public production model performance claim is made.

Where can we review Claims Pro AI evidence?

The public Claims Pro AI evidence page explains contract clause extraction with verifiable citation, change event narrative structuring, analogous case semantic search, audit logging and eval harness metrics.

Eval reports and model cards can be reviewed only through controlled technical review or due diligence after qualification. They distinguish methodology validation from service-mode model evaluation; buyer-side validation is still required, and the public site does not expose source code, source files or ZIP downloads.

What does Claims Pro AI actually do?

Claims Pro AI is presented as an evidence-gated review path for AI-assisted extraction, AI-assisted structuring and semantic search.

The bounded tasks are contract clause extraction with verifiable citation, change event narrative structuring and analogous case semantic search, with human verification required before buyer action.

The deterministic baseline remains responsible for evidence completeness, grouping, timeline reconstruction support and open-action tracking.

Is Claims Pro AI production-validated?

No production validation claim is made publicly.

Without controlled v4 evidence confirming implementation_and_eval_present, the conservative public posture is the Claims Pro AI evaluation path.

AI-augmented positioning requires controlled technical review, service-mode evaluation (available during that review) and buyer-side validation before operational reliance.

What does the eval harness measure?

The eval harness is task-level methodology validation, not production model validation and not outcome-level proof.

It validates schema, metrics, gates and governance for clause-extraction precision/recall by clause type, macro F1, narrative-parsing agreement and Recall@K for analogous case search. Buyer-side validation remains required.

Provider/model performance is measured separately through service-mode evaluation available during controlled technical review; those metrics do not support public claims of claim-outcome prediction, probability of success, legal advice, guaranteed recovery or validated predictive ML accuracy.
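
As a reading aid only, the task-level metrics named above can be sketched generically. This is an illustrative computation under generic definitions of macro F1 and Recall@K, not the Nivorqa eval harness; every function name and label here is hypothetical.

```python
# Illustrative sketch only: generic macro F1 and Recall@K.
# Not the Nivorqa eval harness; all names are hypothetical.

from collections import defaultdict

def macro_f1(gold, pred):
    """Macro F1 over label types: the unweighted average of per-class F1."""
    classes = set(gold) | set(pred)
    tp, fp, fn = defaultdict(int), defaultdict(int), defaultdict(int)
    for g, p in zip(gold, pred):
        if g == p:
            tp[g] += 1
        else:
            fp[p] += 1  # predicted p, but it was wrong
            fn[g] += 1  # missed the true label g
    scores = []
    for c in classes:
        prec = tp[c] / (tp[c] + fp[c]) if tp[c] + fp[c] else 0.0
        rec = tp[c] / (tp[c] + fn[c]) if tp[c] + fn[c] else 0.0
        scores.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(scores) / len(scores) if scores else 0.0

def recall_at_k(ranked_ids, relevant_ids, k):
    """Share of relevant cases that appear in the top-k retrieved results."""
    hits = len(set(ranked_ids[:k]) & set(relevant_ids))
    return hits / len(relevant_ids) if relevant_ids else 0.0
```

None of this says anything about live model quality; it only shows what the reported numbers measure at the task level.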

Do the reported eval metrics prove production AI accuracy?

No. Eval harness metrics are methodology validation for schema, metrics, gates and governance; they do not prove live model accuracy or public production model performance.

Model/provider behavior is evaluated separately when a service-mode configuration is available during controlled technical review.

Buyer-specific validation requires buyer-curated goldens or agreed evaluation cases, and production monitoring is a separate post-integration step.

What is AIExtractionRun / AI audit logging?

AIExtractionRun is the public-facing concept for a traceable AI review record where AI is enabled under controlled review.

The audit log concept includes prompt version, model id, input hash, cost, latency, output reference and reviewer action where available.

Audit logging supports explainability and review traceability; it does not create autonomous legal, commercial or procurement decisions.
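
The audit-record fields listed above can be pictured as a simple structured record. This is a hypothetical sketch loosely following the AIExtractionRun concept; the field names and values are illustrative, not the actual schema.

```python
# Hypothetical sketch of a traceable AI review record, loosely following
# the AIExtractionRun audit-log concept. Field names are illustrative.

import hashlib
from dataclasses import dataclass, asdict
from typing import Optional

@dataclass
class ExtractionRunRecord:
    prompt_version: str
    model_id: str
    input_hash: str                        # hash of the input, not the input itself
    cost_usd: float
    latency_ms: int
    output_ref: str                        # pointer to stored output, not the output
    reviewer_action: Optional[str] = None  # e.g. "accepted", "rejected"

def hash_input(text: str) -> str:
    """Stable content hash so a run is traceable without storing the raw text."""
    return hashlib.sha256(text.encode("utf-8")).hexdigest()

record = ExtractionRunRecord(
    prompt_version="v3",
    model_id="example-model",
    input_hash=hash_input("contract clause text"),
    cost_usd=0.004,
    latency_ms=820,
    output_ref="runs/0001/output.json",
)
```

The design point is the same as the prose: the record supports traceability and reviewer accountability, and stores references and hashes rather than buyer content.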

Can you run it with our provider/key?

Configurable provider or buyer provider key can be discussed only through controlled technical review.

The public site does not collect keys, credentials, files, contract data or live client data.

Provider configuration depends on buyer security constraints, data residency, logging expectations and evaluation design.

Can it run locally?

Local-mode deployment can be discussed as a controlled technical review topic.

The public site does not claim plug-and-play AI, production-ready AI or automatic local deployment.

A serious review should cover provider constraints, deployment assumptions, audit logging, buyer validation and fallback behavior.

Does analogous case search predict outcome?

No. Analogous case semantic search is review support for finding similarity candidates.

It does not predict claim outcome, probability of success, entitlement, legal position or recovery.

A human reviewer must decide whether a returned case is useful for analysis or should be ignored.

What happens if the AI provider fails?

Claims Pro remains deterministic-first.

If an AI provider fails, is unavailable or is not approved by the buyer, the workflow falls back to deterministic evidence review, grouping, timeline reconstruction support and open-action tracking.

Provider failure should be reviewed as part of buyer governance, audit logging and technical review.

What is the deterministic fallback?

The deterministic fallback is the rule-based workflow path that remains reviewable without AI output.

For Claims Pro, that means evidence completeness review, missing-evidence signals, evidence grouping, timeline reconstruction support and open-action tracking.

It helps the module stay auditable even when AI output is rejected, unavailable or outside buyer governance.
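
The deterministic-first behavior described in the last two answers can be sketched as a simple fallback pattern: attempt the AI-assisted path, and on any provider failure fall through to the rule-based review path. All function names and data shapes here are hypothetical illustrations, not the module's implementation.

```python
# Minimal sketch of a deterministic-first fallback: try the AI-assisted
# path, and on provider failure fall back to rule-based review.
# All names and data shapes are hypothetical.

def deterministic_review(evidence):
    """Rule-based path: flag items whose evidence documents are missing."""
    return [item["id"] for item in evidence if not item.get("documents")]

def review_with_fallback(evidence, ai_extract=None):
    """Prefer the AI-assisted path when configured; otherwise stay deterministic."""
    if ai_extract is not None:
        try:
            return {"mode": "ai_assisted", "result": ai_extract(evidence)}
        except Exception:
            pass  # provider failed, unavailable or not approved; fall through
    return {"mode": "deterministic", "result": deterministic_review(evidence)}

evidence = [
    {"id": "CO-1", "documents": ["notice.pdf"]},
    {"id": "CO-2", "documents": []},
]

def failing_provider(_):
    raise RuntimeError("provider unavailable")

result = review_with_fallback(evidence, ai_extract=failing_provider)
# result["mode"] == "deterministic"; result["result"] == ["CO-2"]
```

Either way the workflow returns a reviewable result, which is the auditability property the fallback exists to preserve.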

Why are Risk and Subcontractor not AI-first?

Project Risk and Subcontractor remain deterministic-first because their public value is the auditable baseline: scoring logic, signal extraction, review states and buyer-controlled validation.

They expose tenant-specific ML upgrade paths only after buyer data readiness, labels, evaluation design and human review ownership are clear.

The public site does not claim AI ships today for those modules.

Why is Claims Pro the AI evaluation path?

Claims Pro is the AI evaluation path because extraction, narrative structuring and semantic search can be scoped as bounded review-support tasks with citation and audit expectations.

Without controlled v4 evidence confirming implementation_and_eval_present, the public site does not claim completed AI implementation or production validation.

Project Risk and Subcontractor remain deterministic-first modules with documented tenant-specific ML upgrade paths after data readiness criteria are met.

Can the AI layer run with our provider/key or local mode?

Configurable provider, buyer provider key or local-mode deployment can be discussed only in a controlled technical review.

The public site does not collect provider keys, credentials, files, contract data or live client data.

Is the ML validated?

No validated predictive ML accuracy claim or validated supervised ML claim is made publicly.

Any evaluated ML-assisted pattern would require buyer data, labels, evaluation design, acceptance thresholds and buyer-side validation before operational reliance.

What does ML-ready mean?

ML-ready means the workflow data can be structured for future evaluation, review queues, signal extraction or candidate ML-assisted patterns.

It does not mean the model is universally validated, production-ready or guaranteed to improve outcomes.

What buyer data is required for validation?

Validation requires buyer-approved historical workflow data, module-relevant objects, timestamps, completeness notes, data quality context and security or sharing constraints.

Where evaluated ML-assisted patterns are considered, buyer-approved labels and target outcome definitions are also required.

Can Nivorqa train models on our data?

Not through the public site, and no buyer data should be uploaded here.

Any buyer-specific model training or evaluation would require controlled discussion, explicit permission, data-sharing terms, security review, label review and agreed validation design.

Does the system make automatic decisions?

No automatic legal, procurement, accounting, project or award decisions are claimed.

The public architecture describes review support with human-in-the-loop ownership, traceable recommendations and buyer governance boundaries.

What is the difference between deterministic logic and ML-assisted review?

Deterministic logic is rule-based workflow behavior such as scoring assumptions, completeness checks, grouping, review states and audit trail preparation.

ML-assisted review refers to candidate support patterns such as anomaly detection, similarity search, classification, ranking or retrieval-assisted review after buyer data and labels are understood.

Which modules have the strongest ML-readiness today?

The strongest AI-readiness path is Change Order & Claims Intelligence Pro, because it carries the Claims Pro AI evaluation path for extraction, structuring and semantic-search evidence.

AI-augmented positioning requires controlled technical review, and the public site does not claim production validation.

The other review-ready modules remain important ML-readiness paths: Change Order & Claims Intelligence, Project Risk Intelligence, Subcontractor Cost Control & Margin Leakage.

Risk and Subcontractor can discuss deterministic baselines, signal extraction, buyer labels and tenant-specific ML upgrade paths in qualified technical review.

Which modules are proposal-stage only?

BOQ / Cost Intelligence and Tender Comparison & Award remain proposal-stage only.

Their intelligence language should be read as proposal-stage scoping around BOQ item normalization, item similarity, offer normalization and comparability review, not review-ready source-package handover or automatic award decisions.

Claims Boundaries

What the public site does not promise.

Does this guarantee ROI?

No guaranteed ROI, savings, claim recovery or margin improvement is claimed.

Buyers are responsible for validating fit, outputs and commercial relevance in their own environment.

Does it replace accounting, ERP or payment systems?

No. It does not replace accounting, ERP, payment-processing, marketplace or compliance systems.

Subcontractor Cost Control & Margin Leakage is a decision-support and commercial-control module for review, not a financial system replacement.

Does it use validated predictive ML?

No validated predictive ML accuracy claim is made.

Public module descriptions emphasize deterministic review logic, workflow objects, review evidence and buyer-side validation needs.

First Contact

How to make the first email useful.

What happens before controlled source review?

Before controlled source review, the buyer route, module fit, intended use, disclosure expectations, commercial path and NDA/agreement checkpoint must be clear.

Public pages, module briefs, the questionnaire and the data requirements route should be reviewed before deeper disclosure.

What information should I include in the first email?

Include the following fields:

  • Company
  • Role
  • Company type
  • Module of interest
  • Maturity route
  • Workflow gap
  • Current platform or stack
  • Intended use
  • Preferred review format
  • Timeline
  • NDA required?
  • Specific evaluation question

Also mention any sensitive data that must not be shared before NDA and any known missing data.

Next step

Still a fit after the FAQ?

Send module interest, maturity route, workflow gap, current platform, intended use, review format, timeline and NDA requirement.

Direct line

labs@nivorqa.com

Use email for review-pack requests, module fit questions, licensing conversations and pilot scoping.