OUR SOLUTIONS
Three services,
each with a defined scope
We build document intelligence systems and conversational assistants, and we conduct independent model audits. Each engagement has a fixed timeline, transparent pricing, and a written deliverable.
HOW EVERY ENGAGEMENT RUNS
Our methodology
Every engagement at Mutiara AI follows the same sequence: examine the problem, agree the scope in writing, build or review, evaluate against a defined test, document the results, and hand over. The specifics differ by service, but the discipline does not.
We take on work only where the problem is well suited to the current state of the technology. Where it is not, we say so. This sometimes means we turn away enquiries; that is a feature of the practice, not a limitation.
Problem scoping
We examine the problem and the data before agreeing to take the work on.
Written scope agreement
Deliverables, exclusions, timeline, and price documented before work begins.
Build or review
Development or audit work conducted with weekly written status updates.
Evaluation
System tested against a representative sample; full results shared with the client.
Handover
System, documentation, and maintenance guidance delivered to your team.
SERVICE 01
Document Intelligence System
A bespoke build engagement for firms processing substantial volumes of unstructured documents (contracts, claims forms, invoices, regulatory filings) and seeking to reduce the manual extraction burden. Over ten weeks we examine the document set, configure and tune extraction models, and deliver a working system that classifies and parses documents alongside human review. The engagement is honest about residual error rates and includes a structured handover to the firm's internal operations team. Production deployment is included.
Key outcomes
- Reduction in manual document extraction time
- Structured data output ready for downstream systems
- Human review queue for low-confidence extractions
- Operations team able to run the system independently
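The human review queue in the outcomes above follows a simple pattern: each extracted field carries a model confidence score, and anything below an agreed threshold is held for a person rather than passed downstream. A minimal sketch of that routing rule; the threshold value and field names are hypothetical, not Mutiara AI's actual pipeline:

```python
# Illustrative only: the threshold and field names are invented for this
# sketch. A real deployment tunes the cut-off per document set against
# the measured error rate.

REVIEW_THRESHOLD = 0.85  # hypothetical confidence cut-off


def route_extraction(extraction: dict) -> str:
    """Send confident extractions downstream; queue the rest for review."""
    if extraction["confidence"] >= REVIEW_THRESHOLD:
        return "downstream"
    return "human_review"


batch = [
    {"field": "invoice_total", "value": "4,120.00", "confidence": 0.97},
    {"field": "claim_date", "value": "2024-03-??", "confidence": 0.41},
]
queues = {e["field"]: route_extraction(e) for e in batch}
```

Under this rule, only the low-confidence `claim_date` extraction waits for a reviewer; the confident `invoice_total` flows straight to downstream systems.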
Process overview: ten-week engagement
Document set analysis and feasibility review
Model configuration and initial training
Tuning and human review workflow build
Evaluation against held-out document set
Production deployment and handover
SERVICE 02
Conversational System Build
A focused engagement to build and deploy a single, narrow conversational system for an internal use case: staff handbook lookup, IT helpdesk routing, or customer enquiry triage. We do not claim the system will replace human judgement; we deliver a working assistant that handles the routine cases competently, hands off the rest, and is honest with users about its limits. Includes evaluation against a written test set and three months of post-launch tuning.
Key outcomes
- Routine queries handled without human involvement
- Out-of-scope queries routed to appropriate staff
- System evaluated against a written test set before launch
- Three months of post-launch tuning included
Typical use cases for this service
Handbook lookup
Employees ask questions about HR policies, benefits, or internal procedures. The system retrieves and summarises relevant sections.
IT helpdesk routing
Common IT issues are diagnosed and routed to the right team or resolved directly. Unusual issues are passed to a human agent with context.
Enquiry triage
Incoming customer enquiries are classified and routed. Simple queries are handled directly; complex ones are forwarded with a summary.
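All three use cases share one routing pattern: classify the incoming query, handle known routine intents directly, and pass anything unfamiliar or low-confidence to a human with context. A minimal sketch of that pattern; the intent labels, team names, and threshold are invented for illustration:

```python
# Hypothetical routing table: intent labels and destination teams are
# invented for this sketch, not taken from any real deployment.
ROUTES = {
    "password_reset": "resolve_directly",
    "billing_question": "billing_team",
    "hardware_fault": "it_support",
}


def triage(intent: str, confidence: float, threshold: float = 0.8):
    """Route a classified enquiry, falling back to a human when unsure."""
    # Low-confidence or unrecognised classifications always go to a
    # person, with the enquiry forwarded rather than answered blindly.
    if confidence < threshold or intent not in ROUTES:
        return ("human_agent", "forwarded with summary")
    return (ROUTES[intent], "routed")
```

The design choice worth noting is the fall-through: the system never guesses on an unfamiliar intent, which is what "honest with users about its limits" means in practice.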
SERVICE 03
Model Audit and Risk Review
A two-week independent review of an existing machine learning system already in production within the client's organisation. We examine the training data, the evaluation regime, the monitoring in place after deployment, and the operational guardrails surrounding the model's outputs. The deliverable is a written audit report identifying material issues, their severity, and a sequenced set of remediations. Suitable for firms whose AI systems have grown faster than their governance.
What the audit covers
- Training data provenance and preparation quality
- Evaluation methodology and held-out test set adequacy
- Production monitoring: drift detection and alerting
- Operational guardrails and human oversight processes
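As one illustration of the drift detection the audit examines, the Population Stability Index (PSI) compares a feature's production distribution against its training baseline; scores above roughly 0.2 are a widely used rule-of-thumb alert level. A minimal sketch over pre-binned proportions, with invented example distributions; a real monitoring setup covers many features and wires the signal into alerting:

```python
# Minimal PSI sketch. The baseline and production distributions below
# are invented examples; the 0.2 alert threshold is a common rule of
# thumb, not a universal standard.
import math


def psi(expected: list[float], actual: list[float]) -> float:
    """PSI over pre-binned proportions (each list sums to 1)."""
    eps = 1e-6  # avoid log(0) on empty bins
    return sum(
        (a - e) * math.log((a + eps) / (e + eps))
        for e, a in zip(expected, actual)
    )


baseline = [0.25, 0.25, 0.25, 0.25]    # training-time bin proportions
production = [0.10, 0.20, 0.30, 0.40]  # observed in production
score = psi(baseline, production)
drifted = score > 0.2  # common alert threshold
```

For these example distributions the score lands above 0.2, i.e. a shift large enough that most practitioners would trigger a re-evaluation.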
DECISION GUIDE
Which service fits your situation
| Feature / Situation | Document Intelligence | Conversational System | Model Audit |
|---|---|---|---|
| You process large volumes of documents manually | ✓ | ✗ | ✗ |
| You have an internal query load handled by staff | ✗ | ✓ | ✗ |
| You have an ML model in production with limited oversight | ✗ | ✗ | ✓ |
| Needs production deployment included | ✓ | ✗ | ✗ |
| Requires written deliverable only (report) | ✗ | ✗ | ✓ |
| Post-launch support included | ✗ | 3 months | ✗ |
| Timeline | 10 weeks | variable | 2 weeks |
| Base price (MYR) | 2,140 | 1,420 | 540 |
SHARED ACROSS ALL SERVICES
Technical and professional standards
Data handling agreement
Client data is processed under a signed data handling agreement aligned with PDPA (Malaysia). Data is not retained after the engagement closes.
Written scope before work
No development or review work begins until the scope, deliverables, and exclusions are documented and signed by both parties.
Evaluation before delivery
Every system is evaluated against a defined test set. Results are provided to the client in full, not summarised.
Handover documentation
Each engagement includes system documentation written for the client's internal team, not for Mutiara AI staff or a technical audience.
Monitoring guidance
Clients receive guidance on what to monitor in production, which signals indicate drift, and when a re-evaluation is advisable.
No platform commissions
Technology choices are made on the basis of the problem. We do not receive commissions from tool or infrastructure vendors.
TRANSPARENT PRICING
Base prices in MYR
SERVICE 03
Model Audit
- Two-week independent review
- Written audit report with severity ratings
- Sequenced remediation plan
- Covers data, evaluation, monitoring, guardrails
SERVICE 02
Conversational System
- Build for one internal use case
- Evaluation against written test set
- Three months post-launch tuning
- Handover with full documentation
SERVICE 01
Document Intelligence
- Ten-week engagement
- Production deployment included
- Human review workflow built in
- Written error-rate disclosure
* Base prices shown. Adjustments for larger document sets or more complex deployment environments are agreed in writing before work starts.
Not sure which service fits?
Describe the problem briefly (the volume of documents, the query type, or the model you are concerned about) and we will tell you which service applies, or whether the problem is outside our current scope.
Get in Touch