Your marketing team is using AI. Has anyone written down how?

A practical AI governance framework for Singapore marketing teams. IMDA, PDPC, and ASAS in plain English. The eight concrete risks, and a six-component policy you can take to your team on Monday.

By Gary McRae

12+ years APAC · PMC + CAIG accredited · Singapore

Last reviewed 29 April 2026 · 10 min read

Your marketing team is already using AI. Someone is drafting copy in ChatGPT. Someone is generating images in Midjourney. Someone has built a lookalike audience using a tool that scored customers based on behavioural data nobody knew was being scored. None of this is bad. None of it is documented either.

Singapore’s regulatory stack (IMDA, PDPC, ASAS) gives you a framework to adopt AI safely. The guidance is recent, scattered across three agencies, and short on marketing-specific examples. This essay cuts through the confusion: the regulatory map, the eight concrete risks your team faces, and a six-component policy that fits on one page.

The Singapore regulatory stack, plain English

Five frameworks apply to your marketing team right now. Two are voluntary. Three carry teeth.

IMDA Model AI Governance Framework (Generative AI, May 2024)

Nine governance dimensions: accountability, data, trusted development and deployment, incident reporting, testing and assurance, security, content provenance, safety and alignment R&D, and AI for public good. Voluntary. If you’re using AI at scale, this is the gold standard you’ll be measured against in any future enforcement conversation. Extended in January 2026 with an Agentic AI framework for autonomous AI systems.

PDPC Advisory Guidelines on Personal Data in AI (March 2024)

Non-binding but treated as the enforcement standard. Addresses AI recommendation and decision systems: lookalike audiences, churn prediction, automated segmentation. Requires explicit consent for using personal data to train AI models. Critical gap: the guidelines do not yet specifically cover generative AI use in marketing, so feeding customer data into ChatGPT sits in a regulatory grey zone.

ASAS Code of Advertising Practice

Pre-AI but still applies. Misleading or unsubstantiated claims in advertising are prohibited, including AI-generated copy that hallucinates product features. Recent FTC enforcement in the US (Operation AI Comply, launched September 2024, plus related actions against Rytr, Workado, and Evolv) targeted exactly this category; the ASAS principle is the same.

Elections (Integrity of Online Advertising) Act (effective January 2025)

First Singapore legislation explicitly naming AI. Bans digitally generated or manipulated election advertising. Direct relevance for marketing teams adjacent to political consulting; tangential for most others. Penalties up to SGD 1M.

AI Verify (voluntary)

Open-source testing toolkit against eleven AI governance principles. Used by companies seeking to demonstrate governance maturity to regulators or enterprise buyers. Not mandated; useful as a credibility tool, not a compliance requirement.

Eight risk patterns your marketing team faces

These are concrete patterns, ordered by frequency rather than severity. Which of these has happened, or could happen, in your team?

  1. Customer data pasted into free ChatGPT. A team member uses free-tier ChatGPT to draft personalised email copy and pastes 500 customer email addresses plus purchase history for context. OpenAI processes the data outside Singapore and may train on it.

    Why it matters: direct PDPA breach. Penalties up to the higher of SGD 1M or 10 percent of annual SG turnover. The single highest-frequency risk in SG marketing teams. The full obligation set is covered in PDPA Compliance for Marketing, under Related reading.
  2. Hallucinated product claims. ChatGPT generates marketing copy claiming a SaaS feature you don’t have. Copy goes live. Customer sees inaccuracy.

    Why it matters: ASAS violation. Complaint filed, 14-day window to correct, reputational damage. Recent parallel FTC enforcement shows regulators are alert to this category.
  3. AI-generated deepfake of a real person. Midjourney or a comparable tool generates a realistic face for a customer testimonial. The image goes live. The real person sees it and claims defamation.

    Why it matters: Singapore defamation law applies to AI-generated content. Injunction risk, damages, forced removal. SG GovTech reports 56 percent of businesses have experienced an audio deepfake fraud incident.
  4. Lookalike audience that discriminates. A lookalike audience built from existing customers (70 percent male, aged 25–45) systematically underrepresents women and older buyers despite demographic targeting being off.

    Why it matters: ASAS fairness principle and ESG/brand risk. No explicit anti-discrimination clause in SG privacy law, but the reputational exposure is real and rising.
  5. AI training without explicit consent. A churn prediction model trained on customer behavioural data. Privacy policy mentioned “analytics” but not “AI personalisation.” PDPC audit finds the gap.

    Why it matters: direct PDPA breach. Remedy: retroactive consent or data deletion + model retrain.
  6. Voice cloning without consent. A founder’s voice is used to train a voice-generation model for a customer-support bot. The founder did not explicitly consent to commercial voice use.

    Why it matters: PDPA treats voice as biometric personal data. Use without consent is a breach. Specific guidance is sparse; this is a high-uncertainty zone.
  7. LLM vendor processes data outside SG. Team uses ChatGPT thinking the data stays in Singapore. Default routing sends the request to the US. No DPA in place.

    Why it matters: PDPA permits data export only with explicit consent or an approved data-protection agreement. OpenAI announced Asia data residency in April 2024; Anthropic opened a Singapore office in 2026, but neither defaults to SG processing without configuration.
  8. No audit trail for AI-generated content. A marketing piece is challenged. Team cannot produce a record of which AI model generated it, what prompt, what data was input, or who approved the output.

    Why it matters: weak governance posture in any enforcement conversation. PDPC guidance recommends written policies and documentation; their absence reads as negligence.

The pattern: seven of the eight risks are about process, not about the AI itself. Which means the risk surface is manageable. You can write a policy.

The Marketing AI Governance Checklist

Six components. One page. Take it back to your team and use it as draft policy.

  1. Approved AI tools (whitelist)

    List the tools approved for use: ChatGPT Enterprise (with DPA + SG data residency), Claude (Anthropic, SG residency), Midjourney for non-realistic imagery, internal company AI sandboxes. List the tools banned: any free tier, personal Gemini accounts, any tool without an enterprise DPA. For new tools, ask: (a) Is there a DPA? (b) Is SG data residency available? (c) Does the contract prohibit training on our data?

  2. Data classification: what never enters prompts

    Red (never): customer email addresses, phone numbers, payment data, health information, biometric data, trade secrets, unreleased product plans. Yellow (only with approval, only in approved tools): de-identified customer behaviour, aggregate revenue. Green (OK): public marketing copy, published research, anonymised examples. De-identification does NOT make data safe in free-tier tools. See the pre-prompt scanner sketch after this checklist.

  3. Review gate before publishing

    AI-generated copy involving product claims: product team review + marketing review (two approvers). AI-generated images of realistic faces: legal review + marketing review. AI-generated audience segmentation: data review for demographic fairness + marketing review. Standard: every AI-generated asset gets at least one human approval before it ships. See the approval-matrix sketch after this checklist.

  4. Labelling and transparency

    No SG legal requirement for AI content labelling yet (unlike EU). ASAS principles suggest disclosure where consumer expectations matter. Best practice: internally document AI use; consider visible disclosure when realistic AI imagery could mislead; don't over-label routine AI-assisted text edits. Revisit annually as guidance evolves.

  5. Consent management

    Privacy policy explicitly lists "AI-based personalisation," "predictive model training," and "lookalike audience generation" as data uses. Get explicit consent before deploying any model trained on customer data. Audit existing data: if consent is missing, either obtain retroactive consent or exclude that data from model training. Update consent flows in product before models go live. See the consent-audit sketch after this checklist.

  6. Audit logging

    Every AI-generated asset includes: date generated, model used (with version), input prompt summary, who reviewed, approval date, where it was published. Store logs for two years minimum (aligns with PDPA expectations). Use the log to defend against regulatory inquiries, train new team members, and demonstrate governance maturity in enterprise sales conversations. See the log-record sketch after this checklist.
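
The four sketches below, in Python, turn components 02, 03, 05, and 06 into something a team can run. First, the component-02 scanner: a minimal sketch assuming plain-text prompts and regex detection. The patterns and the pre_flight helper are illustrative, not exhaustive; a production scanner would add NRIC/FIN, payment-card, and health-data patterns.

```python
import re

# Illustrative red-class patterns. A production scanner would add
# Singapore NRIC/FIN numbers, payment-card checks, health terms, etc.
RED_PATTERNS = {
    "email_address": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.-]+\b"),
    "sg_phone": re.compile(r"(?:\+65[ -]?)?\b[3689]\d{3}[ -]?\d{4}\b"),
    "card_number": re.compile(r"\b(?:\d[ -]?){13,16}\b"),
}

def scan_prompt(prompt: str) -> list[str]:
    """Return the red-class categories detected in a prompt."""
    return [name for name, rx in RED_PATTERNS.items() if rx.search(prompt)]

def pre_flight(prompt: str) -> str:
    """Raise before a prompt containing red-class data reaches any AI tool."""
    hits = scan_prompt(prompt)
    if hits:
        raise ValueError(f"Prompt blocked, red-class data detected: {', '.join(hits)}")
    return prompt

# This prompt never reaches the model:
# pre_flight("Draft a renewal email for jane.tan@example.com, +65 9123 4567")
```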
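
Next, the component-03 review gate as an approval matrix: a minimal sketch in which the asset types mirror the checklist and the role names are assumptions, not prescriptions.

```python
# Required approvers per AI-generated asset type, from the review gate above.
REVIEW_MATRIX: dict[str, set[str]] = {
    "product_claim_copy": {"product", "marketing"},
    "realistic_face_image": {"legal", "marketing"},
    "audience_segmentation": {"data", "marketing"},
}

def can_publish(asset_type: str, approvals: set[str]) -> bool:
    """An asset ships only when every required role has signed off."""
    required = REVIEW_MATRIX.get(asset_type)
    if required is None:
        # Unknown asset types fall back to the baseline rule:
        # at least one human approval before anything ships.
        return len(approvals) >= 1
    return required.issubset(approvals)

# Product-claim copy with only marketing sign-off does not ship.
assert can_publish("product_claim_copy", {"marketing"}) is False
assert can_publish("product_claim_copy", {"marketing", "product"}) is True
```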
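
The component-05 consent audit, sketched against a hypothetical customer record with per-purpose consent flags; the field names (customer_id, consents) are illustrative.

```python
# Purposes the privacy policy must name before customer data trains a model.
AI_PURPOSES = {"ai_personalisation", "predictive_model_training",
               "lookalike_generation"}

def split_training_set(customers: list[dict]) -> tuple[list[dict], list[dict]]:
    """Partition records into (usable for training, consent gap)."""
    usable, gap = [], []
    for record in customers:
        if AI_PURPOSES <= set(record.get("consents", [])):
            usable.append(record)
        else:
            # Obtain retroactive consent, or exclude from training entirely.
            gap.append(record)
    return usable, gap

customers = [
    {"customer_id": "C001",
     "consents": ["analytics", "ai_personalisation",
                  "predictive_model_training", "lookalike_generation"]},
    {"customer_id": "C002", "consents": ["analytics"]},  # not enough
]
usable, gap = split_training_set(customers)
assert [r["customer_id"] for r in gap] == ["C002"]
```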
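
Finally, the component-06 audit log as an append-only JSON-lines file: a minimal sketch whose field set mirrors the list above; the file path and model string are illustrative.

```python
import json
from dataclasses import dataclass, asdict

@dataclass
class AIAssetLog:
    """One record per AI-generated asset, retained for two years minimum."""
    generated_on: str    # ISO date the asset was generated
    model: str           # model name with version
    prompt_summary: str  # short summary, never the raw customer data
    reviewed_by: str
    approved_on: str
    published_at: str    # channel or URL where the asset went live

def append_log(record: AIAssetLog, path: str = "ai_asset_log.jsonl") -> None:
    """Append one record to an append-only JSON-lines audit log."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

append_log(AIAssetLog(
    generated_on="2026-04-29",
    model="gpt-4o (2024-08-06 snapshot)",
    prompt_summary="Q3 feature-launch email, variant B",
    reviewed_by="marketing-lead",
    approved_on="2026-04-29",
    published_at="lifecycle email, trial-user segment",
))
```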

Build vs. buy: AI governance frameworks

For startups (Seed–Series A)

Build your own. Use the framework above as a starting template; customise the risk and data sections to your product. Cost: 4–8 hours of your time, or roughly SGD 2–4K in external advisory. Why: governance at your stage is about documenting what you’re already doing, not buying surveillance infrastructure.

For regulated businesses (fintech, healthcare, insurance)

Build with external guidance. Use the IMDA framework as governance baseline. Consider OneTrust or IAPP-aligned tooling if you need audit-grade documentation. Why: regulators (MAS, MOH) expect IMDA-aligned governance. External frameworks give you credibility in any enforcement conversation.

For Series B+ or complex AI deployments

Buy the tooling, or hire external advisory. AI governance software (OneTrust, Microsoft Purview, comparable) gives you inventory, monitoring, continuous compliance. Cost: SGD 50–200K/year for software plus implementation. Why: complexity justifies automation. One-off policy doesn’t scale as you add models.

Honest framing: for 80 percent of marketing teams, the framework above is sufficient. Governance is about process and documentation, not expensive software. Expensive software is for teams running dozens of models in production.

Frequently asked questions

Can I put customer emails in ChatGPT?

Depends on the tier. Free ChatGPT (personal): no, that's a PDPA violation; data goes outside SG without consent or DPA. ChatGPT Enterprise with SG data residency and a signed DPA: yes, with explicit customer consent for AI-based personalisation. The issue isn't the tool, it's the account tier and configuration. Most teams skip this distinction and end up in breach.

Do I need to label AI-generated copy in ads?

No legal requirement yet in Singapore (unlike the EU). ASAS principles suggest transparency where consumer expectations matter. Best practice: internally document that copy was AI-generated; visibly disclose when realistic AI imagery could mislead; do not over-label routine AI-assisted text edits. Revisit annually as guidance evolves.

Is lookalike audience targeting legal in Singapore?

Technically yes, but risky. Singapore privacy law has no explicit anti-discrimination clause for automated decisions. However, ASAS fairness principles and ESG/brand exposure are real. Best practice: audit lookalike audiences for demographic representation gaps before large-scale campaigns, as in the sketch below; document the audit; review the policy quarterly.
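
A minimal sketch of that audit, assuming you can export the audience's demographic shares and a baseline (your full customer base or addressable market); the 10-percentage-point threshold is an illustrative choice, not a regulatory figure.

```python
def representation_gaps(audience: dict[str, float],
                        baseline: dict[str, float],
                        threshold: float = 0.10) -> dict[str, float]:
    """Groups whose audience share falls short of baseline by more than threshold."""
    return {
        group: round(baseline[group] - audience.get(group, 0.0), 4)
        for group in baseline
        if baseline[group] - audience.get(group, 0.0) > threshold
    }

# The lookalike from risk #4: 70 percent male despite a near-even baseline.
audience = {"female": 0.30, "male": 0.70}
baseline = {"female": 0.48, "male": 0.52}
print(representation_gaps(audience, baseline))  # {'female': 0.18}
```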

What happens if I don't have explicit consent for AI model training?

PDPC can require you to obtain consent retroactively or delete the data from the model. If consent is unattainable, deletion is the only path, which means retraining the model on a smaller dataset. The operational cost is significant; prevention is cheaper. Update privacy policies and consent flows before deploying any model on customer data.

What if my AI generates a deepfake of a real person?

Singapore defamation law applies. The person can sue for damages and demand removal. In a political context POFMA applies, with fines up to SGD 1M. Brand damage is immediate. Prevention: never generate realistic AI imagery of real people without explicit written consent; use clearly stylised or synthetic characters instead; audit your image generators' default behaviours.

Do I need to hire an external AI governance consultant?

Not necessarily. If you're seed-stage with simple AI use (ChatGPT for copy, Midjourney for stylised assets), the framework above is sufficient and you can implement it in 4–6 weeks internally. If you're regulated or running complex AI deployments (lookalike at scale, churn models in production), external advisory or governance software is justified.

About the author

Gary McRae runs MCR.AE, a fractional CMO practice for funded Seed–Series A founders and SG SMEs. CAIG-accredited (Certified in AI Governance) and PMC-accredited (Singapore Practising Management Consultant). 12+ years inside APAC marketing teams across fintech, legal tech, professional services, and regulated industries.

Find him on LinkedIn.

Apply this framework to your team.

A 30-minute discovery call. We’ll work through your current AI use, the specific PDPA exposures it creates, and what a one-page policy looks like for your operation.

Related reading

  • Fractional CMO ROI. When does a fractional CMO actually pay back? Cost ranges, ROI math, and the EDG lever for SG founders.
  • Singapore SME GTM Strategy. A five-stage GTM sequence for SG B2B. PDPA-aware outbound, government-channel access, and the order that compounds.
  • MarTech Audit Framework. Half your MarTech budget pays for tools nobody uses. A five-step audit: utilisation scoring, kill/consolidate/keep/upgrade.
  • PDPA Compliance for Marketing. Your marketing team uses personal data daily. Few have read the PDPA. Penalties up to 10 percent of annual SG turnover. The eight-step checklist that closes the gap.
  • EDG for Fractional CMO. EDG covers up to 50 percent of qualifying fractional CMO scope. The PMC accreditation rule, the worker-outcome test, the seven-step application path.
