AI approach

Deterministic-first. AI where it earns its keep.

Claims Pro is the first AI-augmented review path, activated in pilot under controlled discussion. The deterministic Claims workflow remains the baseline; AI-assisted clause extraction, narrative structuring and analogous search are evaluated only with human verification, audit logging and buyer-side validation.

Where AI is scoped first

The controlled AI review path starts with Claims Pro.

Three principles

A conservative AI strategy for construction workflow review.

The public narrative separates deterministic product behavior, AI-assisted review support and buyer-owned decisions.

Principle

Deterministic baseline always ships.

  • Every module has deterministic, auditable workflow logic before AI or ML is considered.
  • Baseline behavior can be reviewed through rules, inputs, outputs, review states and audit trail objects.
  • AI is additive, feature-flagged and human-verified only where evidence supports it.
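The "additive, feature-flagged" relationship between the deterministic baseline and AI assistance can be sketched in a few lines. This is a minimal illustration, not product code: `deterministic_clause_check`, `ai_extract_clause` and `review_clause` are all hypothetical names, and the rule inside the baseline check is a placeholder.

```python
def deterministic_clause_check(text: str) -> str:
    # Placeholder deterministic rule: flag clauses that mention "variation".
    return "flagged" if "variation" in text.lower() else "clear"

def ai_extract_clause(text: str) -> str:
    # Placeholder for a human-verified AI extraction call.
    return f"candidate clause: {text[:40]}"

def review_clause(clause_text: str, ai_enabled: bool = False) -> dict:
    """Deterministic baseline always runs; AI output is additive and flagged."""
    result = {
        "baseline": deterministic_clause_check(clause_text),  # always present
        "ai_suggestion": None,
    }
    if ai_enabled:  # AI never replaces the baseline, it only adds a suggestion
        result["ai_suggestion"] = ai_extract_clause(clause_text)
    return result
```

The point of the shape: removing the flag (or the AI call failing) leaves a fully reviewable deterministic result.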

Principle

Every AI output is logged with prompt version, model id and input hash.

  • Claims Pro AI calls are designed around traceable prompt version, model id and input hash records.
  • Audit logs support reviewer traceability; they do not create autonomous legal or commercial decisions.
  • Buyer-side governance decides retention, access, provider configuration and review workflow.
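The traceable record described above can be sketched as a small immutable structure. This is an assumed shape, not the shipped schema: field names are hypothetical, and the key property is that the log stores a hash of the input and a reference to the output, never the raw content itself.

```python
import hashlib
from dataclasses import dataclass

@dataclass(frozen=True)
class AiAuditRecord:
    prompt_version: str   # e.g. "claims-extract-v1" (hypothetical)
    model_id: str         # provider/model identifier
    input_hash: str       # sha256 of the input text, not the text itself
    output_ref: str       # pointer to stored output, not the output itself
    reviewer_action: str  # e.g. "accepted", "rejected", "edited"

def make_audit_record(prompt_version: str, model_id: str, input_text: str,
                      output_ref: str, reviewer_action: str) -> AiAuditRecord:
    # Hash the input so the log proves what was processed without retaining it.
    input_hash = hashlib.sha256(input_text.encode("utf-8")).hexdigest()
    return AiAuditRecord(prompt_version, model_id, input_hash,
                         output_ref, reviewer_action)
```

Retention, access and storage of such records stay on the buyer side, per the governance bullet above.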

Principle

No outcome prediction, no legal advice, no autonomous decisions.

  • No claim outcome accuracy, probability of success or guaranteed recovery is claimed.
  • No legal advice, entitlement assessment or claim-validity determination is provided.
  • No autonomous procurement, award, accounting, payment or compliance decision is claimed.

Evidence status

Claims Pro AI is public as a controlled-pilot review path unless controlled evidence says otherwise.

The site separates deterministic baseline behavior, controlled-pilot AI evidence, roadmap ML and buyer-data-dependent validation.

Review-ready source-package offer
Eval-gated AI path, deterministic-first
Claims Pro AI evaluation path

Change Order & Claims Intelligence Pro

AI evidence status: eval_harness_only

Claims Pro AI evaluation path is evidence-gated and activated in pilot under controlled discussion. AI assists extraction, structuring and search. It does not predict claim outcome, provide legal advice, determine entitlement or guarantee recovery. Human verification and buyer-side validation are required.

Claims Pro AI evaluation path

Extraction, structuring and search are controlled-pilot review support.

Claims Pro AI features are activated in pilot under controlled discussion and require controlled technical review. The public site does not claim production availability, production validation, live model accuracy or public production model performance without buyer-specific validation and service-mode evaluation.

  • AI-assisted contract clause extraction evaluation path with citation verification.
  • AI-assisted change event narrative structuring evaluation path.
  • Analogous case search evaluation path activated in pilot under controlled discussion.
  • Audit-log design for AI calls under controlled technical review.
  • Human verification before buyer action.

Implementation maturity

The public site names four evidence buckets.

This keeps Claims Pro AI language from implying more maturity than the controlled bundle or source package supports.

Evidence bucket

What is implemented

  • Deterministic workflow baselines are the public reviewable module surface.
  • Review-ready modules can be assessed for source-package fit under qualification.
  • The public site does not claim completed Claims Pro AI implementation without controlled bundle evidence.

Evidence bucket

What is eval-only

  • The Claims Pro AI evaluation path is the conservative public posture when no v4 implementation-and-eval payload is available.
  • Eval harness evidence may cover methodology validation for clause extraction, narrative structuring, analogous case search and audit-log design.
  • Service-mode evaluation is available during controlled technical review when a provider/model is configured.
  • AI-augmented positioning requires controlled technical review.

Evidence bucket

What is roadmap

  • Project Risk tenant-specific ML upgrade path.
  • Subcontractor Margin tenant-specific ML upgrade path.
  • BOQ and Tender proposal-stage intelligence outlines.

Evidence bucket

What requires buyer data

  • Buyer documents, labels, buyer-curated goldens or agreed evaluation cases and evaluation design.
  • False-positive and false-negative tolerance.
  • Human review owner, provider constraints and governance boundary.
  • Buyer-side validation required before operational reliance.

Where ML is on roadmap

Risk and Subcontractor keep tenant-specific ML upgrade paths.

These modules remain deterministic-first. ML is discussed only after buyer data readiness, labels, evaluation design and governance are clear.

Roadmap

Tenant-specific ML only after readiness.

  • Project Risk remains deterministic-first, with tenant-specific ML only after historical project snapshots, labels and evaluation design are validated.
  • Subcontractor Margin remains deterministic-first, with tenant-specific ML only after package history, cost events, margin labels and review thresholds are validated.
  • BOQ / Cost Intelligence and Tender Comparison & Award remain proposal-stage only and do not carry an AI flagship claim.

Review-ready source-package offer

Project Risk Intelligence

Tenant-specific ML can be scoped only after buyer data readiness criteria are met. Candidate upgrade paths may include anomaly detection, heuristic forecasting support and review-priority ranking. Buyer validates historical project snapshots, labels, evaluation design and false-positive/false-negative tolerance. No AI layer is claimed as shipping today for Project Risk Intelligence.

Review-ready source-package offer

Subcontractor Cost Control & Margin Leakage

Tenant-specific ML can be scoped only after buyer data readiness criteria are met. Candidate upgrade paths may include anomaly detection, package similarity and leakage exposure ranking. Buyer validates package history, final margin labels, cost event semantics and review thresholds. No AI layer is claimed as shipping today for Subcontractor Cost Control & Margin Leakage.

Eval harness

Evaluation is measured by task, not by commercial outcome.

The AI layer can be discussed through eval harness methodology validation, service-mode evaluation (available during controlled technical review), model cards and known failure modes. Buyer-side validation is required. No public production model performance claim is made.

Evaluation design

Metrics for Claims Pro AI tasks.

  • Eval harness methodology validation for precision/recall by clause type in contract clause extraction; buyer-side validation required.
  • Eval harness methodology validation for macro F1 in release-level extraction review; buyer-side validation required.
  • Eval harness methodology validation for narrative parsing agreement in change event narrative structuring; buyer-side validation required.
  • Eval harness methodology validation for Recall@K in analogous case semantic search; buyer-side validation required.
  • Model cards and known failure modes for buyer review; buyer-side validation required.
  • No public production model performance claim.
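The task-level metrics named above are standard and can be computed without any reference to commercial outcomes. The sketch below assumes gold and predicted extractions are represented as sets of (document id, clause type) pairs; the representation and function names are illustrative, not the actual eval harness.

```python
def per_type_prf(gold: set, pred: set) -> dict:
    """Precision/recall/F1 per clause type from (doc_id, clause_type) pair sets."""
    types = {t for _, t in gold | pred}
    scores = {}
    for t in types:
        g = {d for d, tt in gold if tt == t}   # gold docs carrying this type
        p = {d for d, tt in pred if tt == t}   # predicted docs for this type
        tp = len(g & p)
        prec = tp / len(p) if p else 0.0
        rec = tp / len(g) if g else 0.0
        f1 = 2 * prec * rec / (prec + rec) if prec + rec else 0.0
        scores[t] = (prec, rec, f1)
    return scores

def macro_f1(scores: dict) -> float:
    # Unweighted mean of per-type F1: rare clause types count as much as common ones.
    return sum(f1 for _, _, f1 in scores.values()) / len(scores)

def recall_at_k(relevant: list, ranked: list, k: int = 5) -> float:
    # Fraction of relevant cases surfaced in the top-k search results.
    return len(set(ranked[:k]) & set(relevant)) / len(relevant)
```

Note that macro F1 treats every clause type equally, which is why release-level review uses it rather than a volume-weighted average.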

Tenant data sovereignty

Provider and deployment choices are controlled discussions.

Public pages do not collect data, keys, credentials or files.

  • Configurable provider as a controlled technical discussion.
  • Buyer provider key or local-mode deployment as controlled discussion.
  • AI audit log for prompt version, model id, input hash, output reference and reviewer action where AI is enabled.
  • No live client data collection through the public site.
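The provider choices in the list above reduce to a small configuration decision: buyer-supplied provider key or fully local deployment. The sketch below is hypothetical (type and field names are assumptions, not the shipped config surface) and only shows that each mode has a hard precondition enforced before any AI call is possible.

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AiProviderConfig:
    mode: str                       # "buyer_key" or "local" (assumed modes)
    provider: Optional[str] = None  # buyer-selected hosted provider
    endpoint: Optional[str] = None  # local inference endpoint for local mode

def validate_provider_config(cfg: AiProviderConfig) -> AiProviderConfig:
    """Reject incomplete configurations before any AI path can be enabled."""
    if cfg.mode not in ("buyer_key", "local"):
        raise ValueError("unknown mode")
    if cfg.mode == "buyer_key" and not cfg.provider:
        raise ValueError("buyer_key mode requires a buyer-selected provider")
    if cfg.mode == "local" and not cfg.endpoint:
        raise ValueError("local mode requires a local inference endpoint")
    return cfg
```

Either way, keys and endpoints live in buyer-side configuration, consistent with the no-public-data-collection posture above.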

What is not claimed

No unsafe AI, legal, procurement or outcome claims.

  • No claim outcome accuracy.
  • No probability of success.
  • No legal advice.
  • No claim-validity determination or entitlement assessment.
  • No autonomous decisions.
  • No guaranteed ROI, recovery or margin improvement.
  • No validated predictive ML accuracy claim as a public claim.

Related review paths

Qualified buyers should pair AI questions with data requirements, ML readiness, model governance and technical review.

Next step

Bring AI questions into qualified technical review.

Send module interest, data readiness, provider constraints, evaluation questions and the governance owner before deeper discussion.

Direct line

labs@nivorqa.com

Use email for review-pack requests, module fit questions, licensing conversations and pilot scoping.