The premise
TenderPulse uses AI for three concrete tasks: (a) extracting structured data from tender ZIPs (form fields, eligibility criteria, deadlines, BoQ tables), (b) drafting bid responses based on the extracted requirements and your company profile, and (c) answering procurement-related questions in our chat copilot. AI is not a brand wrapper around a chatbot — it is a set of focused capabilities deployed at specific points in your workflow.
We treat AI outputs as drafts, not as authoritative documents. Every bid you submit through e-GP carries your signature, not ours. We design the product so that human review of AI output is structurally easy: side-by-side source citations, highlight-on-hover for original document text, edit anywhere before submission.
Models and infrastructure
Our primary inference provider runs Anthropic Claude models in our approved Asia-Pacific region. We use Claude Haiku 4.5 for high-volume extraction tasks (structured-data parsing, eligibility scoring) and Claude Sonnet 4.6 for higher-quality drafting tasks. We do not currently use OpenAI, Google, Mistral, or any model hosted outside our managed cloud for production inference.
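The task-to-model split above can be pictured as a simple router; the task names and model ID strings here are illustrative placeholders, not our production configuration:

```python
# Illustrative sketch of the routing described above. Task names and
# model identifiers are examples, not our actual production values.

EXTRACTION_TASKS = {"structured_data_parsing", "eligibility_scoring"}
DRAFTING_TASKS = {"bid_drafting", "copilot_chat"}

def select_model(task: str) -> str:
    """Route high-volume extraction to Haiku, quality-sensitive drafting to Sonnet."""
    if task in EXTRACTION_TASKS:
        return "claude-haiku-4-5"
    if task in DRAFTING_TASKS:
        return "claude-sonnet-4-6"
    raise ValueError(f"unknown task: {task}")
```

The point of the split is cost and latency: extraction runs on every uploaded document, while drafting runs once per bid section.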
Our AI inference contract explicitly prohibits training on customer inputs. The managed AI layer does not retain prompts or responses for training purposes. Anthropic, as the model provider, has independently committed (and we have written confirmation under our enterprise agreement) that customer inputs flowing through inference are not used for model training.
Embeddings for retrieval — when we need to compare a tender requirement to your company’s prior work — are generated via Cohere Embed Multilingual v3 on the same managed inference layer. These embeddings are stored in a per-tenant vector database alongside your tenant data, encrypted at rest, and never shared across tenants.
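A toy sketch of the per-tenant scoping principle, using an in-memory store and cosine similarity; in production this is a managed vector database, but the retrieval call only ever sees the calling tenant's partition:

```python
import math

def cosine(a, b):
    """Cosine similarity between two embedding vectors."""
    dot = sum(x * y for x, y in zip(a, b))
    norm_a = math.sqrt(sum(x * x for x in a))
    norm_b = math.sqrt(sum(y * y for y in b))
    return dot / (norm_a * norm_b)

def retrieve(tenant_store, tenant_id, query_vec, k=3):
    # Only the calling tenant's partition is ever searched; other
    # tenants' vectors are structurally out of reach of this call.
    rows = tenant_store[tenant_id]  # {doc_id: vector}
    scored = sorted(rows.items(), key=lambda kv: cosine(query_vec, kv[1]), reverse=True)
    return [doc_id for doc_id, _ in scored[:k]]
```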
What customer data does AI see
Data classes that do flow to the AI inference path:
- Content from tender documents you upload — ITB, BoQ, technical specifications, evaluation criteria, schedule of items
- Your company profile fields needed for eligibility matching — registration data, financial limits, work specialisations, past projects (only the fields relevant to the specific tender requirement being evaluated)
- Your chat prompts in the procurement copilot
- Your accepted-output IDs, so the system can avoid regenerating content you have already approved
Data classes that do not flow to AI:
- Authentication credentials, OTP codes, password hashes
- Payment data, card numbers, mobile-banking tokens
- Other tenants’ data — the inference call is constructed with strict tenant scoping and the prompt template enforces single-tenant context
- Admin audit log entries
- Internal staff communications about your account
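One way to picture the boundary between the two lists above is a strict allowlist applied before any inference payload is constructed; the field names below are illustrative examples, not our actual schema:

```python
# Illustrative allowlist filter: only profile fields relevant to the
# tender requirement being evaluated ever reach the inference payload.
# Field names are examples, not our production schema.
AI_ALLOWED_FIELDS = {"registration_no", "financial_limit", "specialisations", "past_projects"}
AI_BLOCKED_FIELDS = {"password_hash", "otp_secret", "card_number", "mbanking_token"}

def build_inference_payload(profile: dict, relevant_fields: set) -> dict:
    """Return only the allowlisted fields this evaluation actually needs."""
    requested = relevant_fields & AI_ALLOWED_FIELDS
    # Defence in depth: a blocked field can never appear in the payload.
    assert not (requested & AI_BLOCKED_FIELDS), "blocked field in inference path"
    return {k: v for k, v in profile.items() if k in requested}
```

An allowlist (rather than a blocklist) means a newly added profile field is excluded from the AI path by default until someone deliberately adds it.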
Hallucination is a defect, not a feature
Large language models can produce plausible-looking but factually incorrect output. This is a known limitation of the current generation of generative AI. We treat every such occurrence as a defect to be designed against, not as an inherent quirk to be tolerated.
Our defences in depth:
- Citation enforcement — every claim in an AI output must link to a source span in your uploaded documents or your company profile. Outputs without grounded citations are flagged in the UI.
- Pre-submission review pane — the bid drafting interface forces a side-by-side comparison of every AI claim against the cited source before the draft can be exported.
- Confidence labels — outputs with low retrieval confidence are explicitly marked. Users see the confidence class (high / medium / low / unverified) for each section.
- Bounded autonomy — AI does not submit bids on your behalf. Final submission is always a human action requiring an explicit click in the e-GP submission flow.
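The confidence labels above can be sketched as a simple mapping from retrieval evidence to a display class; the numeric thresholds here are illustrative, not our production values:

```python
def confidence_class(retrieval_score: float, has_citation: bool) -> str:
    """Map retrieval evidence to the confidence class shown per section.

    Thresholds are illustrative placeholders, not production values.
    """
    # A claim with no grounded citation is never shown as confident,
    # regardless of how fluent the generated text looks.
    if not has_citation:
        return "unverified"
    if retrieval_score >= 0.85:
        return "high"
    if retrieval_score >= 0.60:
        return "medium"
    return "low"
```

The key design choice is that citation absence dominates: fluency alone can never earn a "high" label.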
Use cases we will not build
We will not build the following, even where customers request them:
- Scanning competitor documents (uploaded without their consent) for compromising material that could be used in negotiations or disputes
- Generating false or misleading statements designed to deceive a procurement authority
- Automated bid submission that bypasses human review (the human-in-the-loop principle in §4 is non-negotiable)
- Surveillance features tracking individual users’ document opens, search queries, or chat history within a tenant for workplace-monitoring purposes
- Synthesising signatures, seals, or other authentication artefacts on documents
- Generating content that violates the Digital Security Act 2018 or ICT Act 2006
Bias and demographic effects
We acknowledge that the underlying model (Claude) was trained on a corpus where English is over-represented relative to Bangla. This creates real risks: drafts generated in Bangla may be lower-quality than English drafts, eligibility extraction from Bangla tender documents may be less accurate, and regionally specific procurement language (district-level work-category names, locally recognised certifications) may be poorly handled.
Our mitigations:
- A curated Bangladesh-tender benchmark used for regression testing on every model upgrade
- Post-processing rules that catch known Bangla extraction weaknesses
- A feedback loop where users can flag specific outputs as low-quality, so we can prioritise the underlying gap
- A commitment to publish annually on the gap between our English and Bangla output quality, so customers can see the trend
Where AI output materially under-serves a Bangla user relative to an English user on a comparable task, we treat that as a fairness defect and prioritise the fix.
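A minimal sketch of how the benchmark comparison might work: compute the mean quality gap between English and Bangla runs of comparable tasks, and flag a fairness defect when the gap exceeds a tolerance. The threshold and metric shape are illustrative assumptions, not our published methodology:

```python
def language_gap(scores_en, scores_bn, max_gap=0.05):
    """Compare English vs Bangla benchmark scores on comparable tasks.

    Flags a fairness defect when Bangla quality trails English by more
    than max_gap. The 0.05 tolerance is an illustrative placeholder.
    """
    mean = lambda xs: sum(xs) / len(xs)
    gap = mean(scores_en) - mean(scores_bn)
    return {"gap": gap, "fairness_defect": gap > max_gap}
```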
Right to a non-AI path
If you prefer not to use AI features for any reason — privacy posture, legal advice from your counsel, internal policy, or personal preference — you can opt out from your account settings. With AI features off, the product still works: tender ZIP storage, manual eligibility checking against tender requirements, document organisation, audit log. The AI copilot and AI drafting are simply unavailable for that account.
Pricing for AI-off accounts reflects the reduced functionality — see the AI-off tier on the pricing page. You can switch the toggle on and off at any time.
For organisations with multiple users, the toggle can be enforced at the tenant level by the admin user, in which case individual users in that tenant cannot turn AI back on.
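The precedence rule above can be expressed in a few lines; parameter names are illustrative, but the logic is the one described: a tenant-level admin lock always wins over the individual user's toggle.

```python
def ai_enabled(tenant_ai_locked_off: bool, user_opt_in: bool) -> bool:
    """Resolve the effective AI setting for a user in a tenant.

    A tenant-level admin lock overrides any individual user preference;
    only when the tenant allows AI does the user's own toggle apply.
    """
    if tenant_ai_locked_off:
        return False
    return user_opt_in
```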