Technical review questionnaire

Prepare the review request before emailing.

Copy these prompts into an email so the first Nivorqa Labs review can focus on module fit, data objects, integration assumptions and boundaries.

Who should use this questionnaire

Who should use it

Buyers with a concrete workflow question.

  • Construction software, ERP, integration, project-control, commercial-control or advisory buyers with a named workflow gap.
  • Product, engineering, delivery, partnerships or commercial stakeholders preparing a qualified technical review request.
  • Teams that can describe current systems, available data objects, integration assumptions and intended review format.

Who should not use it

Not for self-serve access or proof requests.

  • Buyers expecting public source-code download, automatic access, self-serve trial access or public pricing.
  • Teams seeking customer proof, named deployments, live client data or production deployment claims from the public site.
  • Teams expecting legal advice, accounting/payment/compliance guarantees, guaranteed ROI or validated ML claims from the public site.

Module selection

Choose a route before completing the questionnaire.

The questionnaire uses the existing five public modules and preserves the review-ready versus proposal-stage split.

Review-ready source-package offers

  • Change Order & Claims Intelligence Pro: commercial construction teams reviewing change-event records before internal claim-readiness decisions.
  • Project Risk Intelligence: teams reviewing cost, schedule, procurement and issue signals across project-control workflows.
  • Subcontractor Cost Control & Margin Leakage: commercial teams reviewing subcontract package movement, variation exposure and budget variance.

Available under proposal

  • BOQ / Cost Intelligence and Tender Comparison & Award remain proposal-stage only.

Questionnaire sections

Fill in the context that makes a review useful.

These fields are meant to be copied into an email, not submitted through this website.

Module selection

  • Primary module of interest:
  • Maturity route expected:
  • Secondary module to compare:
  • Why this module appears relevant:
  • Which route should be excluded:

Company and workflow context

  • Company type:
  • Buyer role:
  • Workflow owner:
  • Current workflow pain:
  • Decision or review process this should support:

Technical environment

  • Current platform or stack:
  • Relevant ERP, PMO, contract, cost or document systems:
  • Data format currently available:
  • Integration owner:
  • Security or deployment constraints to mention before deeper review:

Data availability

  • Available sample data type:
  • Synthetic or buyer-approved sample possible?:
  • Data quality concerns:
  • Fields likely missing or inconsistent:
  • Labels or review outcomes available for ML-assisted evaluation?:
  • Data that must not be shared before NDA:

Intelligence Layer readiness

  • Candidate ML-assisted pattern to discuss:
  • Deterministic workflow logic already accepted?:
  • Completeness, confidence or anomaly signals expected?:
  • Human-in-the-loop reviewer:
  • Governance boundary or auditability question:

ML/data readiness questions

  • Available historical data:
  • Number of projects/packages/events:
  • Label availability:
  • Target outcome definitions:
  • Time coverage:
  • Data completeness:
  • Permission to use sample or synthetic data?:
  • Validation metric preference:
  • False-positive/false-negative tolerance:
  • Human review owner:
  • Security/data sharing constraints:

Integration assumptions

  • Likely input source:
  • Expected output destination:
  • User workflow or review queue:
  • Reporting or audit-trail need:
  • Host-platform adaptation assumptions:

Review format

  • Preferred review format:
  • Stakeholders attending:
  • Questions to answer in first review:
  • Review-pack or module-detail page already reviewed?:
  • Follow-up path if fit is confirmed:

Timeline and NDA

  • Timeline:
  • NDA required before deeper detail?:
  • Commercial path under consideration:
  • Internal decision date:
  • Anything that would block a qualified review:

Disqualifying expectations

  • Is public source access expected?:
  • Is public pricing required?:
  • Is production proof required before review?:
  • Are legal/accounting/payment/compliance guarantees expected?:
  • Are ROI, recovery, savings, margin-improvement or validated ML claims expected?:
  • Are automatic legal, procurement, accounting or award decisions expected?:

Module-specific prompt blocks

Add the module prompts that match the route.

Use these prompts to describe available workflow objects before any deeper review.

Prompt block

Project Risk Intelligence

  • Risk register: what fields, owners, severity labels and update cadence exist?
  • Cost exposure: which exposure records or cost-pressure indicators can be reviewed?
  • Schedule pressure: which schedule signals or milestone-pressure fields are available?
  • Procurement status: which procurement updates or long-lead items influence risk review?
  • Issue records: which open issues, actions or escalation notes should be represented?

Prompt block

Change Order & Claims Intelligence Pro

  • Change events: how are events opened, categorized, owned and closed?
  • Notice records: which notices, dates, references and contractual workflow fields exist?
  • Evidence references: where are documents, photos, correspondence or supporting records held?
  • Contract clause extraction: which contract documents and clause labels could be reviewed under controlled discussion?
  • Analogous case semantic search: are buyer-curated golden cases and relevance judgments available?
  • Commercial impacts: which time, cost or scope impact fields can be reviewed?
  • Open actions: what review, approval or follow-up actions remain unresolved?

Prompt block

Subcontractor Cost Control & Margin Leakage

  • Subcontract package records: which packages, scopes, budgets and owners are represented?
  • Commitments: which committed values, awards or approved changes are available?
  • Margin exposure: where do exposure indicators, allowance movement or cost pressure appear?
  • Variation/claim indicators: which variation, claim or extra records affect commercial control?
  • Package status: which approval, payment, certification or closeout states matter?

Prompt block

BOQ / Tender proposal-stage scoping

  • BOQ / Cost Intelligence is proposal-stage only: describe BOQ structure, unit assumptions, cost categories and ERP handoff questions.
  • Tender Comparison & Award is proposal-stage only: describe bidder comparison, scoring support, clarification records and audit-trail needs.
  • Confirm which buyer data and workflow assumptions must be validated before any deeper source-package discussion.
  • Do not frame BOQ or Tender as review-ready source-package offers.

Prompt block

Intelligence Layer and ML readiness

  • Which deterministic workflow logic should be reviewed before any ML-assisted pattern is discussed?
  • Which candidate pattern is relevant: anomaly detection, similarity search, classification, ranking, embeddings/RAG or lightweight forecasting?
  • What buyer data, labels or review outcomes are available for validation?
  • Who owns human-in-the-loop review, traceability, logging expectations and drift/evaluation review?
  • Confirm no validated predictive ML accuracy claim and no guaranteed ROI are expected.

ML/data readiness questions

Assess data readiness before requesting a review.

Use these prompts to describe historical coverage, labels, validation preferences, human review ownership and data sharing constraints. They are copy prompts only; nothing is submitted or uploaded here.

General readiness

Questions for any module.

  • What available historical data exists for the module workflow?
  • How many projects, packages, events, BOQ items or tender offers are represented?
  • Which labels or review outcomes are available, and who created them?
  • How are target outcome definitions written and validated by the buyer?
  • What time coverage exists, including snapshot frequency and date completeness?
  • Which required fields are complete, missing, inconsistent or unreliable?
  • Can synthetic or buyer-approved sample data be used before deeper review?
  • Which validation metric does the buyer prefer for any evaluated ML-assisted pattern?
  • What false-positive and false-negative tolerance is acceptable for review prioritization?
  • Who is the human review owner for surfaced signals, exceptions and recommendations?
  • What security, confidentiality, retention or data sharing constraints apply before NDA or agreement?

Module-specific ML/data prompts

Ask the readiness questions that match the module route.

Review-ready modules can discuss historical workflow data and labels. BOQ and Tender remain proposal-stage only for item normalization and offer comparison scoping.


Project Risk Intelligence

Review-ready module. ML/data readiness can be reviewed for deterministic risk scoring, signals and candidate anomaly or forecasting support.

  • Are historical project snapshots available?
  • Which cost, schedule, procurement and issue signals exist across snapshots?
  • Are overrun, delay, claim or quality outcome labels available?
  • What time horizon should review support: weeks, months, project phase or closeout?
  • How many projects and snapshots are represented, and how complete are the timestamps?

Change Order & Claims Intelligence Pro

Claims Pro AI evaluation path. ML/data readiness can be reviewed for contract clause extraction, change event narrative structuring, analogous case semantic search, methodology validation and service-mode evaluation scope, with no legal advice or production validation claims.

  • Are contract documents available for clause extraction review under controlled discussion?
  • Are buyer-curated golden cases or agreed evaluation cases available for buyer-specific validation?
  • Are indexable case summaries available for analogous case semantic search?
  • How many change events, notices and evidence references are available?
  • Are claim outcomes available as buyer-reviewed labels?
  • Are accepted, rejected or partially accepted labels available?
  • Which disputed events should be represented, and which should remain excluded before NDA?
  • Are recovery amounts available if the buyer is willing to evaluate them?

Subcontractor Cost Control & Margin Leakage

Review-ready module. ML/data readiness can be reviewed for package history, margin exposure signals and candidate anomaly or similarity support.

  • Is subcontract package history available across commitments and cost events?
  • Are progress, certification or payment snapshots available?
  • Is final margin outcome available as a buyer-reviewed label?
  • Are unapproved extras that materialized available as labels?
  • Are payment delay labels available for review?

BOQ / Cost Intelligence

Proposal-stage only. ML/data readiness is limited to item normalization and item similarity scoping.

  • Which BOQ item structures and unit conventions are available for proposal-stage scoping?
  • Are item normalization labels or buyer-approved category mappings available?
  • Are item similarity examples available for review?
  • Which pricing fields are sensitive and should stay excluded before NDA?
  • Confirm this remains proposal-stage only and does not imply a review-ready source package.

Tender Comparison & Award

Proposal-stage only. ML/data readiness is limited to offer normalization and offer comparison scoping.

  • Which tender offers and package scopes are available for proposal-stage scoping?
  • Are offer normalization labels or comparison examples available?
  • Are exclusion, qualification or clarification labels available?
  • Which bidder data permissions and confidentiality constraints apply?
  • Confirm this remains proposal-stage only and does not imply automatic award decisions.

Copy-ready email fields

Paste this into the email body.

Keep confidential project data out of the first email unless an NDA is already in place.

Company:
Role:
Company type:
Primary module of interest:
Secondary module to compare:
Maturity route expected:
Workflow gap:
Current platform or stack:
Technical environment:
Available data objects:
Data quality or sharing constraints:
Intelligence Layer or candidate ML-assisted pattern:
Available historical data:
Number of projects/packages/events:
Label availability:
Target outcome definitions:
Time coverage:
Data completeness:
Sample or synthetic data permission:
Preferred validation metric:
False-positive/false-negative tolerance:
Human review owner:
Security/data sharing constraints:
Buyer-side validation owner:
Integration assumptions:
Preferred review format:
Timeline:
NDA required?:
Module-specific notes:
Disqualifying expectations to rule out:
Specific evaluation question:

Static-site boundary

Nothing is submitted or stored here.

The questionnaire is a copy aid only.

  • No form submission.
  • No local storage.
  • No form tracking.
  • No backend or CRM workflow.
  • No public source access, public pricing or fabricated proof.

Next step

Send the completed questionnaire by email.

Use the prepared email fields to share module interest, workflow context, data availability, integration assumptions and review timing.

Direct line

labs@nivorqa.com

Use email for review-pack requests, module fit questions, licensing conversations and pilot scoping.