Nivorqa Intelligence Layer

Deterministic-first intelligence architecture with explicit governance boundaries.

A public overview of deterministic workflow logic, construction data signal extraction, the evidence-gated Claims Pro AI evaluation path and ML-ready upgrade paths, without unsafe claims about ML, production readiness, ROI, legal standing or compliance.

Synthetic architecture outline

A visual map of the review path.

The outline shows deterministic core, signal extraction, ML-assisted review, governance and human-in-the-loop validation without buyer records or production screenshots.

Synthetic-only architecture outline. No buyer records shown. Deterministic core, signal extraction, ML-assisted review, governance validation and human-in-the-loop review. No validated predictive ML accuracy claim.

Operating model

Deterministic logic before ML-assisted patterns.

The layer starts with auditable workflow objects and only then identifies candidate ML support for buyer-side validation.

  • Deterministic workflow logic structures review queues, scoring, completeness checks and audit trails first.
  • Construction data signal extraction turns existing records into reviewable indicators without public data collection.
  • Claims Pro AI evaluation path is separate from the general Intelligence Layer and remains evidence-gated until controlled technical review confirms implementation status and service-mode evaluation scope.
  • ML-assisted review and prioritization can be scoped only after buyer data, labels, quality and evaluation design are understood.
  • Eval harness means methodology validation of schema, metrics, gates and governance, not a public production model performance claim.
  • Explainability, auditability and governance remain visible through source references, reason notes, review states and buyer validation.
  • Human-in-the-loop review remains required before legal, procurement, accounting, project or commercial action.

Governance boundary

ML-ready does not mean validated.

Public language stays conservative across every module and candidate pattern.

  • ML-ready means structured for future evaluation; it does not mean universally validated.
  • ML-assisted means review support; it does not mean autonomous decision-making.
  • Claims Pro AI evaluation path is evidence-gated until implementation and service-mode evaluation evidence are confirmed in controlled technical review.
  • AI-augmented positioning requires controlled technical review.
  • Public site does not claim production validation or public production model performance.
  • Eval harness language refers to methodology validation of schema, metrics, gates and governance; it does not prove live model accuracy.
  • Service-mode evaluation is available during controlled technical review after a provider/model is configured.
  • AI is additive, feature-flagged and human-verified where it is enabled.
  • Buyer-side validation required for data quality, labels, workflow fit, evaluation design and governance.
  • Human-in-the-loop review remains required before buyer action.
  • No validated predictive ML accuracy claim.
  • No validated supervised ML claim.
  • No guaranteed ROI, savings, claim recovery, margin improvement or outcome.
  • No legal advice, claim-validity determination, accounting replacement, payment-processing guarantee, compliance certification or automatic award decisions.
  • No production-readiness guarantee, plug-and-play production deployment, public source-code download, free sandbox, automatic marketplace access, customer proof or live client data.

Evidence status

Claims Pro AI is distinct from general ML-ready module posture.

The Intelligence Layer names signals and candidate patterns, while module evidence status decides whether public copy can describe implementation-present AI evidence, eval-only AI evidence, deterministic ML upgrade path or proposal-stage outline.

Public posture

Four public evidence buckets.

  • Claims Pro: eval-only AI evidence by default; AI-augmented positioning requires controlled technical review and buyer-side validation.
  • Project Risk: deterministic_with_ml_upgrade_path; no AI layer is claimed as shipping today.
  • Subcontractor Margin: deterministic_with_ml_upgrade_path; tenant-specific ML depends on buyer data readiness.
  • BOQ and Tender: proposal_stage_intelligence_outline; proposal-stage only.

Architecture principles

Six public concepts explain the Intelligence Layer safely.

These concepts frame the relationship between deterministic workflow logic, signal extraction, ML-assisted review and human governance.

Intelligence Layer

Deterministic workflow logic

Core review queues, scoring, completeness checks, status rules and audit trails are explainable workflow logic before any ML-assisted pattern is considered.

  • Rule-based risk, evidence, package, BOQ and tender review objects.
  • Visible assumptions for scoring, completeness and prioritization.
  • Human-in-the-loop review before buyer action.
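The "auditable workflow logic" described above can be pictured as plain, rule-based code. A minimal sketch, assuming invented field names, weights and statuses (none of this is the product's actual schema):

```python
# Minimal sketch of a deterministic review object: rule-based scoring with
# visible assumptions and a human-readable reason trail.
RISK_WEIGHTS = {"cost_variance": 3, "schedule_slip": 2, "missing_evidence": 4}  # assumed weights

def score_record(record: dict) -> dict:
    """Score one record with explicit rules; every point carries a reason note."""
    score, reasons = 0, []
    for signal, weight in RISK_WEIGHTS.items():
        if record.get(signal):
            score += weight
            reasons.append(f"+{weight}: {signal} flagged")
    return {"id": record["id"], "score": score, "reasons": reasons, "status": "needs_review"}

def build_review_queue(records: list[dict]) -> list[dict]:
    """Deterministic queue: highest score first, ties broken by id for stable audits."""
    return sorted((score_record(r) for r in records), key=lambda s: (-s["score"], s["id"]))

queue = build_review_queue([
    {"id": "r1", "cost_variance": True},
    {"id": "r2", "missing_evidence": True, "schedule_slip": True},
])
```

Because the weights and rules are visible, a reviewer can reconstruct any score by hand, which is the property the bullets above describe.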

Intelligence Layer

Construction data signal extraction

The layer turns construction records into reviewable signals without asking the public site to collect data or process uploads.

  • Signal extraction from risk, schedule, cost, evidence, package, BOQ and tender objects.
  • Completeness, anomaly, similarity and classification candidates for buyer-side validation.
  • Synthetic examples on the public site; no buyer records.
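A completeness signal of the kind listed above can be sketched as a pure function over an existing record; no upload or collection is involved. The required fields below are an illustrative assumption, not a real schema:

```python
# Sketch: extracting a reviewable completeness signal from an existing record.
REQUIRED_FIELDS = ["contract_ref", "date", "amount", "evidence_refs"]  # assumed schema

def completeness_signal(record: dict) -> dict:
    """Return what is missing plus a 0..1 completeness ratio for review queues."""
    missing = [f for f in REQUIRED_FIELDS if not record.get(f)]
    ratio = 1 - len(missing) / len(REQUIRED_FIELDS)
    return {"missing": missing, "completeness": ratio,
            "note": "complete" if not missing else f"missing: {', '.join(missing)}"}

sig = completeness_signal({"contract_ref": "C-1", "date": "2024-01-02", "amount": 1200})
```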

Intelligence Layer

ML-assisted review and prioritization

ML-assisted patterns can help route review attention after buyer data, labels, evaluation design and governance are defined. Claims Pro is an evidence-gated AI evaluation path unless controlled bundle evidence confirms that both implementation and service-mode evaluation are present.

  • Anomaly detection candidate for records that merit review.
  • Similarity search candidate for evidence, items, packages or offers.
  • Classification and ranking candidates that support review queues, not autonomous decisions.

Intelligence Layer

Explainability, auditability and governance

Recommendations should remain traceable to data fields, workflow rules, confidence or completeness signals and model usage logs where applicable.

  • Explainability notes for surfaced signals.
  • Auditability through source references, review status and decision trail concepts.
  • Governance boundary: no automatic legal, procurement or accounting decisions.

Intelligence Layer

Buyer-side validation

ML-ready does not mean universally validated; buyer-side validation required before operational reliance.

  • Buyer data and labels are required for supervised or evaluated patterns.
  • Model performance depends on data quality, sampling, labels and evaluation design.
  • No validated predictive ML accuracy claim and no guaranteed ROI.

Intelligence Layer

Human-in-the-loop review

The layer supports people reviewing construction workflows; it does not replace buyer governance or professional judgment.

  • Human reviewers retain decision responsibility.
  • Confidence and completeness signals should guide review priority, not force outcomes.
  • Buyer teams decide legal, procurement, accounting and commercial actions.

Candidate ML patterns

ML-assisted patterns remain candidates until buyer validation.

The public site may name patterns that could be evaluated; it does not claim validated supervised ML, universal accuracy, live model accuracy, public production model performance or production deployment.

Candidate pattern

Anomaly detection

Candidate pattern for surfacing records that differ from buyer-approved baselines, with transparent review notes and buyer-side validation required.

  • Risk signals
  • Evidence gaps
  • Margin exposure
  • Price anomalies
  • Offer comparability questions
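One simple form this candidate could take is a z-score against a buyer-approved baseline. The baseline figures and the 2.0 threshold below are illustrative assumptions that buyer-side validation would set:

```python
import statistics

# Sketch: flag records that deviate from a buyer-approved baseline.
def anomaly_flags(baseline: list[float], values: dict[str, float], z_threshold: float = 2.0):
    """Return ids whose value sits more than z_threshold std-devs from the baseline mean."""
    mean = statistics.mean(baseline)
    stdev = statistics.stdev(baseline)
    flagged = []
    for rec_id, value in values.items():
        z = (value - mean) / stdev
        if abs(z) > z_threshold:
            flagged.append({"id": rec_id, "z": round(z, 2), "note": "review: differs from baseline"})
    return flagged

flags = anomaly_flags(baseline=[100, 102, 98, 101, 99], values={"p1": 100, "p2": 140})
```

The output is a review note with a transparent statistic, not a decision.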

Candidate pattern

Similarity search

Candidate pattern for comparing evidence, BOQ items, subcontract packages or tender offers after data permissions and evaluation rules are defined.

  • Evidence similarity
  • Package similarity
  • Item similarity
  • Offer similarity
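A minimal, explainable version of this candidate is token-overlap (Jaccard) similarity; a fuller path might use embeddings, but plain overlap keeps the review note auditable. Item ids and texts here are invented:

```python
# Sketch: a transparent similarity candidate using token overlap (Jaccard).
def jaccard(a: str, b: str) -> float:
    ta, tb = set(a.lower().split()), set(b.lower().split())
    return len(ta & tb) / len(ta | tb)

def most_similar(query: str, items: dict[str, str], top_k: int = 2) -> list[str]:
    """Rank candidate items by overlap with the query text."""
    ranked = sorted(items, key=lambda i: jaccard(query, items[i]), reverse=True)
    return ranked[:top_k]

hits = most_similar(
    "concrete works foundation slab",
    {"boq-1": "foundation slab concrete works",
     "boq-2": "steel frame erection",
     "boq-3": "slab reinforcement works"},
)
```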

Candidate pattern

Classification

Candidate pattern for assigning review categories when buyer data and labels exist; no validated supervised ML claim is made publicly.

  • Change-event classification candidate
  • Evidence category candidate
  • Tender exclusion or qualification detection candidate
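Before any supervised model exists, a classification candidate can start as visible keyword rules with an explicit human-review fallback. Categories and keywords below are illustrative assumptions; buyer labels would replace them for any evaluated model:

```python
# Sketch: a rule-backed classification candidate with a human-review fallback.
CATEGORY_KEYWORDS = {
    "design_change": ["drawing", "revision", "design"],
    "site_condition": ["ground", "weather", "access"],
}

def classify_event(text: str) -> str:
    """Assign a candidate category, falling back to human review when no rule matches."""
    lowered = text.lower()
    for category, keywords in CATEGORY_KEYWORDS.items():
        if any(k in lowered for k in keywords):
            return category
    return "needs_human_review"

label = classify_event("Revision B drawing issued after slab pour")
```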

Candidate pattern

Ranking

Candidate pattern for prioritizing review queues where the buyer validates scoring criteria, labels, thresholds and review outcomes.

  • Risk priority
  • Leakage exposure ranking
  • Missing-evidence priority
  • Comparability review priority
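A ranking candidate of this kind can be a single transparent score over signals the buyer validates. The weights below are illustrative assumptions; the output orders attention, it does not decide outcomes:

```python
# Sketch: blend exposure and evidence gaps into one review-priority score.
def review_priority(record: dict) -> float:
    return 0.7 * record.get("exposure", 0.0) + 0.3 * record.get("missing_evidence_ratio", 0.0)

def rank_queue(records: list[dict]) -> list[str]:
    """Return record ids ordered by descending review priority."""
    return [r["id"] for r in sorted(records, key=review_priority, reverse=True)]

order = rank_queue([
    {"id": "a", "exposure": 0.2, "missing_evidence_ratio": 0.9},
    {"id": "b", "exposure": 0.8, "missing_evidence_ratio": 0.1},
])
```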

Candidate pattern

Embeddings/RAG

Candidate pattern for retrieval-assisted review of source references when permissions, citation behavior and governance boundaries are agreed.

  • Evidence retrieval
  • Contract-reference support
  • Review note grounding
  • Traceable source references
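The governance point of this candidate, every retrieved passage carries its source reference, can be shown with plain keyword retrieval; an embedding-based retriever would preserve the same citation shape. The corpus entries below are invented:

```python
# Sketch: retrieval-assisted review where every hit is traceable to a source ref.
CORPUS = [
    {"ref": "contract/§12.3", "text": "notice of delay within 14 days"},
    {"ref": "contract/§9.1", "text": "variations require written instruction"},
]

def retrieve(query: str, corpus=CORPUS) -> list[dict]:
    """Return passages overlapping the query, each with its citation reference."""
    q = set(query.lower().split())
    hits = []
    for doc in corpus:
        overlap = q & set(doc["text"].lower().split())
        if overlap:
            hits.append({"ref": doc["ref"], "text": doc["text"], "matched": sorted(overlap)})
    return hits

hits = retrieve("delay notice period")
```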

Candidate pattern

Lightweight forecasting

Candidate pattern for heuristic forecasting support only after buyer data, evaluation design and limits are agreed.

  • Schedule pressure context
  • Cost exposure context
  • Risk trend review
  • No validated predictive ML accuracy claim
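"Heuristic forecasting support" can be as simple as comparing moving-average windows and emitting a trend note for reviewers, not a prediction. The series and window below are invented:

```python
# Sketch: a moving-average trend note as review context, not a validated forecast.
def trend_context(series: list[float], window: int = 3) -> dict:
    """Compare the latest window average to the previous one and describe the drift."""
    recent = sum(series[-window:]) / window
    prior = sum(series[-2 * window:-window]) / window
    direction = "rising" if recent > prior else "flat_or_falling"
    return {"recent_avg": recent, "prior_avg": prior, "direction": direction}

ctx = trend_context([10, 11, 10, 12, 14, 16])
```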

Module-specific intelligence examples

Each module has different intelligence signals and limits.

Review-ready modules can support qualified technical review. BOQ and Tender remain proposal-stage only.

Review-ready source-package offer · Eval-gated AI path, deterministic-first

Change Order & Claims Intelligence Pro

Controlled-pilot AI review path: deterministic-first Claims Pro with eval harness evidence for extraction, structuring and analogous case search, pending controlled technical review.

AI evidence status

eval_harness_only: the public posture is controlled-pilot AI evidence. Clause extraction, narrative structuring and analogous case search are presented as eval harness methodology validation, not production availability, production validation or public production model performance.

Claims Pro AI features are activated in pilot under controlled discussion and require controlled technical review. Public site does not claim production availability, completed AI implementation, live model accuracy or public production model performance without service-mode evaluation and buyer-specific validation.

Intelligence examples

  • Evidence completeness.
  • Evidence grouping.
  • Evidence intelligence support.
  • Missing-evidence signals.
  • Contract clause extraction with verifiable citation evaluation path.
  • Change event narrative structuring evaluation path.
  • Analogous case semantic search evaluation path.

Buyer validation

  • Confirm whether the reviewed bundle is eval_harness_only or implementation_and_eval_present before using AI-augmented implementation language.
  • Validate extracted clauses against source documents and citation references.
  • Validate precision/recall by clause type, macro F1, narrative parsing agreement and Recall@K for analogous search as methodology validation; buyer-side validation required.
  • Confirm buyer-curated goldens or agreed evaluation cases before buyer-specific validation.
  • Confirm service-mode evaluation scope separately from eval harness methodology validation.
  • Confirm model card, prompt version, model id, input hash and known failure mode logging expectations.
  • Confirm the workflow supports decision review only and does not replace legal governance.
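One of the metrics named above, Recall@K for analogous case search, can be checked by hand against a golden set. The case ids below are invented; a buyer-curated golden set would replace them:

```python
# Sketch: Recall@K for an analogous-case search evaluation.
def recall_at_k(retrieved: list[str], relevant: set[str], k: int) -> float:
    """Fraction of relevant cases that appear in the top-k retrieved results."""
    return len(set(retrieved[:k]) & relevant) / len(relevant)

golden = {"case-7", "case-12"}
r = recall_at_k(["case-12", "case-3", "case-7", "case-9"], golden, k=3)
```

This is methodology validation in the sense used above: it checks the metric and the golden-set mechanics, not live model accuracy.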

Governance boundary

  • Traceable evidence references and review-state audit notes.
  • Every AI output is logged with prompt version, model id and input hash where AI is enabled.
  • AI assists extraction, structuring and search. It does not predict claim outcome, provide legal advice, determine entitlement or guarantee recovery. Human verification and buyer-side validation are required.
  • AI-augmented positioning requires controlled technical review.
  • Public site does not claim production validation or public production model performance.
  • Human-in-the-loop review before any claim or commercial action.
  • No legal advice, no claim-validity determination and no guaranteed claim recovery.
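
The AI output log named in the boundary above can be sketched as a small record. Field names are illustrative assumptions; the point is that prompt version, model id and an input hash make every output reconstructable for review:

```python
import hashlib

# Sketch: an audit record for one AI output; the input hash lets reviewers
# verify exactly what the model saw.
def log_ai_output(prompt_version: str, model_id: str, input_text: str, output_text: str) -> dict:
    return {
        "prompt_version": prompt_version,
        "model_id": model_id,
        "input_hash": hashlib.sha256(input_text.encode("utf-8")).hexdigest(),
        "output": output_text,
    }

entry = log_ai_output("p-003", "example-model-v1", "sample clause text", "sample extracted clause")
```
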

Review-ready source-package offer · Deterministic-first with ML upgrade path

Project Risk Intelligence

Deterministic-first review-ready source-package offer with documented tenant-specific ML upgrade path.

AI evidence status

deterministic_ml_ready: Deterministic risk scoring, completeness scoring and signal extraction are the public evidence; tenant-specific ML remains an upgrade path.

No AI layer is claimed as shipping today for Project Risk. Buyer data, labels and evaluation design are required for any ML upgrade.

Intelligence examples

  • Deterministic risk scoring.
  • Data completeness scoring.
  • Cost exposure signals.
  • Schedule pressure signals.
  • Transparent anomaly detection upgrade path.
  • Heuristic forecasting support.
  • ML-ready dataset export.

Buyer validation

  • Validate risk taxonomy, scoring assumptions, cost/schedule/procurement signals and labels.
  • Confirm whether anomaly and forecasting support should remain heuristic or move into evaluated ML-assisted patterns.
  • Confirm no validated predictive ML accuracy claim is being inferred from public material.

Governance boundary

  • Human-in-the-loop review before buyer action.
  • Explainability through visible scoring assumptions, source signals and audit notes.
  • No autonomous project decisions, no guaranteed ROI and no production-readiness guarantee.

Review-ready source-package offer · Deterministic-first with ML upgrade path

Subcontractor Cost Control & Margin Leakage

Deterministic-first review-ready source-package offer with documented tenant-specific ML upgrade path.

AI evidence status

deterministic_ml_ready: Deterministic package review, variance review and margin leakage signals are the public evidence; tenant-specific ML remains an upgrade path.

No AI layer is claimed as shipping today for Subcontractor Margin. Buyer package history, labels and evaluation design are required for any ML upgrade.

Intelligence examples

  • Margin exposure signals.
  • Leakage exposure ranking.
  • Margin leakage signals.
  • Commitment, budget and award movement signals.
  • Anomaly detection candidate.
  • Package similarity candidate.
  • Exposure ranking candidate.

Buyer validation

  • Validate package identity, budget and award fields, commitment states, variation labels and exposure definitions.
  • Confirm ranking, anomaly and similarity assumptions before operational review.
  • Confirm this remains commercial-control review support, not accounting or payment processing.

Governance boundary

  • Completeness and exposure signals should be traceable to package records.
  • Human-in-the-loop review before buyer commercial action.
  • No accounting replacement, no payment-processing guarantee and no guaranteed margin improvement.

Available under proposal · Proposal-stage module

BOQ / Cost Intelligence

Proposal-stage intelligence outline. Available under proposal; scoped buyer validation is required before deeper discussion.

AI evidence status

roadmap_only: Proposal-stage intelligence outline only for item normalization, item similarity and price anomaly review scoping.

No implemented AI feature, review-ready source package or current controlled source handover is implied.

Intelligence examples

  • BOQ line and unit review outline.
  • Cost category mapping outline.
  • Item structure and unit consistency signals.
  • Cost category and ERP handoff signals.
  • Item normalization candidate.
  • Item similarity candidate.
  • Price anomaly review candidate.

Buyer validation

  • Validate BOQ structure, unit conventions, cost categories, pricing sensitivity and ERP handoff assumptions.
  • Confirm whether item normalization, item similarity or price anomaly review has enough buyer-approved data for scoping.
  • Confirm this remains proposal-stage only until buyer assumptions are validated.

Governance boundary

  • Proposal-stage intelligence outline only.
  • No estimating or pricing accuracy guarantee.
  • No production-readiness guarantee and no public upload or buyer data processing.
  • Buyer-specific validation required before any controlled source-package discussion.

Available under proposal · Proposal-stage module

Tender Comparison & Award

Proposal-stage intelligence outline. Available under proposal; scoped procurement validation is required before deeper discussion.

AI evidence status

roadmap_only: Proposal-stage intelligence outline only for offer normalization, exclusion or qualification detection and comparability review scoping.

No implemented AI feature, automatic award decision, review-ready source package or current controlled source handover is implied.

Intelligence examples

  • Bidder comparison matrix outline.
  • Package-level evaluation outline.
  • Offer normalization signals.
  • Exclusion and qualification signals.
  • Offer normalization candidate.
  • Exclusion/qualification detection candidate.
  • Comparability review candidate.

Buyer validation

  • Validate bidder data permissions, package scope, exclusion and qualification labels, scoring governance and approval workflow.
  • Confirm whether offer normalization or comparability review has enough buyer-approved data for scoping.
  • Confirm this remains proposal-stage only and does not automate award decisions.

Governance boundary

  • Proposal-stage intelligence outline only.
  • Procurement governance stays buyer-owned.
  • Human-in-the-loop review before any award or procurement action.
  • No automatic award decisions, no autonomous procurement decisions and no guaranteed tender outcomes.

Related architecture

These pages separate ML-ready data structures from validated model claims and operational governance decisions.

Next step

Bring intelligence questions into qualified technical review.

Send the module interest, available data objects, candidate ML pattern, validation owner and governance question before any deeper discussion.

Direct line

labs@nivorqa.com

Use email for review-pack requests, module fit questions, licensing conversations and pilot scoping.