TenderPulse
RESPONSIBLE AI POLICY · v2026-05-07.v1

AI as a tool for human work.

Which AI models we use, the guardrails around them, the no-training-on-customer-data guarantee, and where we will not let AI act autonomously. Last updated 7 May 2026.

No training on customer data · AI inference (approved region) · Human-in-the-loop on every output
1

The premise


IN BRIEF
AI is a tool for us: it saves time when writing bids, speeds up eligibility analysis, and finds documents. But a final submission is never AI-only; there is always human review.

TenderPulse uses AI for three concrete tasks: (a) extracting structured data from tender ZIPs (form fields, eligibility criteria, deadlines, BoQ tables), (b) drafting bid responses based on the extracted requirements and your company profile, and (c) answering procurement-related questions in our chat copilot. AI is not a brand wrapper around a chatbot — it is a set of focused capabilities deployed at specific points in your workflow.

We treat AI outputs as drafts, not as authoritative documents. Every bid you submit through e-GP carries your signature, not ours. We design the product so that human review of AI output is structurally easy: side-by-side source citations, highlight-on-hover for original document text, edit anywhere before submission.

PRO-USER CLAUSE
You own every AI output. Per our EULA §4, the IP in AI outputs generated for you belongs to you. We claim no ownership over your AI-derived bid drafts, eligibility analyses, or chat responses. We also claim no licence to reuse them, retrain on them, or anonymise them for general improvement.
2

Models and infrastructure


IN BRIEF
We use Anthropic's Claude family, hosted in our approved Asia-Pacific region for AI inference. Hosting in this region means your prompts never leave the approved region. The AI inference contract states: no training on customer inputs.

Our primary inference provider runs Anthropic Claude models in our approved Asia-Pacific region. We use Claude Haiku 4.5 for high-volume extraction tasks (structured-data parsing, eligibility scoring) and Claude Sonnet 4.6 for higher-quality drafting tasks. We do not currently use OpenAI, Google, Mistral, or any model hosted outside our managed cloud for production inference.

Our AI inference contract explicitly prohibits training on customer inputs. The managed AI layer does not retain prompts or responses for training purposes. Anthropic, as the model provider, has independently committed (and we have written confirmation under our enterprise agreement) that customer inputs flowing through inference are not used for model training.

Embeddings for retrieval — when we need to compare a tender requirement to your company’s prior work — are generated via Cohere Embed Multilingual v3 on the same managed inference layer. These embeddings are stored in a per-tenant vector database alongside your tenant data, encrypted at rest, and never shared across tenants.
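The tenant-scoping guarantee above can be sketched in miniature. This is an illustrative toy, not our production code: the class and field names (`VectorStore`, `tenant_id`) are hypothetical, and the similarity function is a bare dot product standing in for a real vector database's ranking.

```python
from dataclasses import dataclass, field

@dataclass
class VectorStore:
    """Toy per-tenant vector store: every record carries a tenant_id,
    and every query filters on it before similarity is computed."""
    records: list = field(default_factory=list)  # (tenant_id, doc_id, embedding)

    def add(self, tenant_id, doc_id, embedding):
        self.records.append((tenant_id, doc_id, embedding))

    def query(self, tenant_id, embedding, top_k=3):
        # Tenant scoping happens BEFORE ranking: other tenants' vectors
        # are never even candidates for the similarity search.
        def dot(a, b):
            return sum(x * y for x, y in zip(a, b))
        candidates = [r for r in self.records if r[0] == tenant_id]
        ranked = sorted(candidates, key=lambda r: dot(r[2], embedding), reverse=True)
        return [doc_id for _, doc_id, _ in ranked[:top_k]]

store = VectorStore()
store.add("tenant-a", "past-project-1", [0.9, 0.1])
store.add("tenant-a", "past-project-2", [0.2, 0.8])
store.add("tenant-b", "other-tenant-doc", [0.95, 0.05])

# A query for tenant-a never surfaces tenant-b's document,
# even though it is the closest vector overall.
print(store.query("tenant-a", [1.0, 0.0], top_k=1))  # ['past-project-1']
```

The point of the sketch is the ordering: the tenant filter runs before ranking, so cross-tenant leakage is structurally impossible rather than merely unlikely.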

3

What customer data does AI see


IN BRIEF
What the AI sees: the content of your uploaded tender ZIPs, your company profile (so it can match eligibility), and your chat prompts. What the AI does not see: your login password, payment data, or anything belonging to another customer.

Data classes that do flow to the AI inference path:

  • Content from tender documents you upload — ITB, BoQ, technical specifications, evaluation criteria, schedule of items
  • Your company profile fields needed for eligibility matching — registration data, financial limits, work specialisations, past projects (only the fields relevant to the specific tender requirement being evaluated)
  • Your chat prompts in the procurement copilot
  • Your accepted-output IDs, so the system can avoid regenerating content you have already approved

Data classes that do not flow to AI:

  • Authentication credentials, OTP codes, password hashes
  • Payment data, card numbers, mobile-banking tokens
  • Other tenants’ data — the inference call is constructed with strict tenant scoping and the prompt template enforces single-tenant context
  • Admin audit log entries
  • Internal staff communications about your account
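The two lists above describe an allowlist, and an allowlist is easy to sketch. The field names below are hypothetical illustrations, not our real schema; the idea is simply that anything not explicitly approved for the inference path is dropped by default.

```python
# Hypothetical field names for illustration; the real schema is internal.
AI_ALLOWED_FIELDS = {
    "tender_content", "company_profile", "chat_prompt", "accepted_output_ids",
}

def build_inference_payload(record: dict) -> dict:
    """Allowlist filter: anything not explicitly approved for the AI
    inference path (credentials, payment data, audit logs, ...) is dropped."""
    return {k: v for k, v in record.items() if k in AI_ALLOWED_FIELDS}

record = {
    "tender_content": "ITB section 3 ...",
    "chat_prompt": "Am I eligible for this tender?",
    "password_hash": "$2b$12$...",   # stays in the auth service
    "card_number": "4111...",        # stays in the payment service
}
payload = build_inference_payload(record)
print(sorted(payload))  # ['chat_prompt', 'tender_content']
```

An allowlist fails safe: a newly added field is excluded from AI until someone deliberately approves it, which is the opposite of a blocklist's failure mode.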
4

Hallucination is a defect, not a feature


IN BRIEF
The AI can occasionally generate incorrect information; to us that is a bug, not something to be tolerated. We attach source citations to every output, and the UI prompts you not to submit a bid without verifying it.

Large language models can produce plausible-looking but factually incorrect output. This is a known limitation of the current generation of generative AI. We treat every such occurrence as a defect to be designed against, not as an inherent quirk to be tolerated.

Our defences in depth:

  • Citation enforcement — every claim in an AI output is required to be linked to a source span in your uploaded documents or your company profile. Outputs without grounded citations are flagged in the UI.
  • Pre-submission review pane — the bid drafting interface forces a side-by-side comparison of every AI claim against the cited source before the draft can be exported.
  • Confidence labels — outputs with low retrieval confidence are explicitly marked. Users see the confidence class (high / medium / low / unverified) for each section.
  • Bounded autonomy — AI does not submit bids on your behalf. Final submission is always a human action requiring an explicit click in the e-GP submission flow.
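The citation-enforcement and confidence-label defences above combine into a simple gating rule, sketched here. This is a minimal illustration with hypothetical section fields and made-up score thresholds, not our production logic.

```python
def review_sections(sections):
    """Label each drafted section: 'unverified' if it has no grounded
    citation, otherwise a confidence class from its retrieval score."""
    flagged = []
    for s in sections:
        if not s.get("citations"):
            label = "unverified"          # citation enforcement kicks in
        elif s["retrieval_score"] >= 0.8:
            label = "high"
        elif s["retrieval_score"] >= 0.5:
            label = "medium"
        else:
            label = "low"
        flagged.append((s["id"], label))
    return flagged

sections = [
    {"id": "eligibility", "citations": ["ITB §12.1"], "retrieval_score": 0.91},
    {"id": "experience", "citations": ["profile:past-projects"], "retrieval_score": 0.55},
    {"id": "turnover", "citations": [], "retrieval_score": 0.70},
]
print(review_sections(sections))
# [('eligibility', 'high'), ('experience', 'medium'), ('turnover', 'unverified')]
```

Note that a missing citation outranks a decent retrieval score: a fluent but ungrounded section is still marked unverified.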
PRO-USER CLAUSE
If an AI output contains a material factual error that you can demonstrate (e.g. a fabricated figure, a misquoted statute, a non-existent regulation), and you submitted the output to a procurement authority in reliance on it, you can claim a service credit. We treat hallucinated factual errors the same as a measurable bug. Submit the example to help@tenderpulse.com.bd and we will respond within 5 business days.
5

Use cases we will not build


IN BRIEF
We will not build these things with AI: scanning anyone's documents without consent to hunt for compromising material, building surveillance features, generating false statements with AI, or automated bid submission that does not pause for human review.

The following are use cases we will not build, even if customers request them:

  • Scanning competitor documents (uploaded without their consent) for compromising material that could be used in negotiations or disputes
  • Generating false or misleading statements designed to deceive a procurement authority
  • Automated bid submission that bypasses human review (the human-in-the-loop principle in §4 is non-negotiable)
  • Surveillance features tracking individual users’ document opens, search queries, or chat history within a tenant for workplace-monitoring purposes
  • Synthesising signatures, seals, or other authentication artefacts on documents
  • Generating content that violates the Digital Security Act 2018 or ICT Act 2006
6

Bias and demographic effects


IN BRIEF
AI systems do not always work as well in Bangla as they do in English. We tune specifically for Bangla and mixed Bangla-English prompts, and we test against regional tender contexts rather than stopping at generic English benchmarks.

We acknowledge that the underlying model (Claude) was trained on a corpus where English is over-represented relative to Bangla. This creates real risks: drafts generated in Bangla may be lower-quality than English drafts, eligibility extraction from Bangla tender documents may be less accurate, and regionally specific procurement language (district-level work-category names, locally recognised certifications) may be poorly handled.

Our mitigations: a curated Bangladesh-tender benchmark used for regression testing on every model upgrade, post-processing rules that catch known Bangla extraction weaknesses, a feedback loop where users can flag specific outputs as low-quality so we can prioritise the underlying gap, and a commitment to publish annually on the gap between our English and Bangla output quality so customers can see the trend.

Where AI output materially under-serves a Bangla user relative to an English user on a comparable task, we treat that as a fairness defect and prioritise the fix.
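The "fairness defect" test above implies a measurable gap check. A minimal sketch, assuming hypothetical benchmark scores and a made-up threshold (the real regression suite and its pass criteria are internal):

```python
def fairness_gap(scores: dict) -> float:
    """Gap between English and Bangla benchmark accuracy
    (positive means Bangla under-performs)."""
    return round(scores["en"] - scores["bn"], 3)

def is_fairness_defect(scores: dict, threshold: float = 0.05) -> bool:
    # Hypothetical release gate: flag the model upgrade if Bangla quality
    # trails English by more than the threshold on the tender benchmark.
    return fairness_gap(scores) > threshold

print(is_fairness_defect({"en": 0.92, "bn": 0.84}))  # True  -> prioritise the fix
print(is_fairness_defect({"en": 0.92, "bn": 0.90}))  # False -> within tolerance
```

Running a gate like this on every model upgrade turns the annual published gap from a report into an enforced regression bound.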

7

Right to a non-AI path


IN BRIEF
If you prefer not to use the AI features and only want the storage and eligibility-checklist tools, that is possible. The AI opt-out lives in your account settings.

If you prefer not to use AI features for any reason — privacy posture, legal advice from your counsel, internal policy, or personal preference — you can opt out from your account settings. With AI features off, the product still works: tender ZIP storage, manual eligibility checking against tender requirements, document organisation, audit log. The AI copilot and AI drafting are simply unavailable for that account.

Pricing for AI-off accounts reflects the reduced functionality — see the AI-off tier on the pricing page. You can switch the toggle on and off at any time.

For organisations with multiple users, the toggle can be enforced at the tenant level by the admin user, in which case individual users in that tenant cannot turn AI back on.
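The precedence rule described above (tenant-level enforcement beats individual preference) can be sketched as a tiny resolver. Field names here are hypothetical illustrations, not our real settings schema.

```python
def ai_enabled(tenant: dict, user: dict) -> bool:
    """Effective AI toggle: a tenant-level lock set by the admin
    overrides any individual user's preference; otherwise the
    user's own setting applies, defaulting to on."""
    if tenant.get("ai_disabled_by_admin"):
        return False
    return user.get("ai_opt_in", True)

print(ai_enabled({"ai_disabled_by_admin": True}, {"ai_opt_in": True}))  # False: admin lock wins
print(ai_enabled({}, {"ai_opt_in": False}))                             # False: user opted out
print(ai_enabled({}, {}))                                               # True: default on
```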

⚖ EXERCISING YOUR RIGHTS
Email help@tenderpulse.com.bd; we reply within 48 hours.