AI Policy & Governance Guidelines

AI Policy and Code of Ethics

1. Purpose

NorthWing Digital uses AI to improve quality, efficiency, and creativity in our internal operations and our client work. This policy establishes required standards for safe, ethical, transparent, and brand‑aligned use of AI across the organization. This framework reflects our core value of Trust Through Transparency, ensuring AI-driven advancements elevate our clients while upholding the highest standards of accountability.

This is a rules‑based policy, not a training document.

2. Core Commitments

NorthWing’s approach to AI is grounded in three principles:

  1. Human‑in‑the‑Loop (HIL) — All AI‑assisted work is reviewed and finalized by a qualified human.
  2. Data Protection — We do not expose client data, sensitive information, or proprietary assets to unapproved AI systems.
  3. Transparency & Trust — We use AI in a way that protects our clients’ brands, meets legal standards, and upholds NWD’s reputation.

3. Red Line Data (Prohibited Inputs)

Employees may not input the following into public AI tools:

  1. Client proprietary data, analytics, or non‑public materials
  2. Campaign performance data, proposals, or internal reporting
  3. Credentials, tokens, or access information
  4. PII, PHI, or regulated personal/medical/financial data
  5. Any NDA‑protected or confidential internal documentation

4. Approved Tools & Allowed Use

NorthWing maintains a vetted list of approved AI tools, viewable by all staff at any time; access to individual tools may vary by role. Only approved tools may be used for client deliverables.

Public AI tools (such as Gemini, Claude, and ChatGPT) may be used for generalized tasks such as drafts, outlines, summaries of public information, image concepting, and productivity support. Shadow (unsanctioned) AI usage is not permitted.

5. HIL Requirements (“The AI Oreo”)

  1. Human Input — Clear instructions or prompts written by a strategist/creator.
  2. AI Draft — AI may generate ideas, drafts, variations, or QA notes.
  3. Human Output — A qualified human edits, expands, corrects, and finalizes work.
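The three layers above can be sketched as a simple tracking structure. This is an illustrative sketch only; the class, field, and function names below are hypothetical, not part of any real NWD system.

```python
from dataclasses import dataclass

# Hypothetical record mirroring the three "AI Oreo" layers.
@dataclass
class Deliverable:
    prompt: str           # Layer 1: human input (strategist's instructions)
    ai_draft: str         # Layer 2: AI-generated draft
    final: str = ""       # Layer 3: human-finalized output
    reviewed_by: str = ""

def finalize(d: Deliverable, reviewer: str, edited_text: str) -> Deliverable:
    """Work is complete only once a named human edits and signs off."""
    if not d.prompt or not d.ai_draft:
        raise ValueError("missing human input or AI draft layer")
    d.final = edited_text
    d.reviewed_by = reviewer
    return d

d = finalize(Deliverable("Draft three tagline options for a retail client.",
                         "<AI draft text>"),
             reviewer="strategist@example", edited_text="Human-edited tagline")
assert d.reviewed_by == "strategist@example"
```

The point of the sketch is the invariant: no deliverable has a `final` value without a named human reviewer attached to it.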

6. Accuracy, Voice, and Brand Standards

  1. Fact‑check all claims, names, data, and technical language.
  2. Remove hallucinations, invented details, or fabricated citations.
  3. Align tone with brand voice guidelines.
  4. Rewrite generic, repetitive, or low‑quality (“AI‑flavored”) output.
  5. Ensure all assets meet licensing, copyright, and accessibility standards.

7. Intellectual Property

To qualify as NWD-owned IP, a deliverable must include substantial human authorship:

  1. Raw AI output alone does not qualify as original creative work, and writing prompts alone does not constitute human authorship.
  2. AI‑generated visual assets are for ideation only (unless the purpose is to make a point about or with AI‑generated content, in which case the deliverable must be clearly attributed and labeled).
  3. Final client-facing visuals must be licensed or contain substantial human modification to ensure defensible IP.

8. Transparency to Clients

  1. We disclose AI usage when requested by the client.
  2. We disclose when deliverables include substantial, unchanged AI outputs.
  3. We do not disclose internal efficiency uses unless required.

9. Accountability

Employees are responsible for ensuring their AI use complies with this policy.

  1. Misuse, including entering restricted data, bypassing HIL review, or using unapproved tools, may result in disciplinary action.
  2. The final human reviewer is fully accountable for the accuracy, brand alignment, and legal compliance of all AI‑assisted work.

Governance & Workflows

1. AI for Internal Use

  1. Permitted for brainstorming, research, workflow improvement, and non‑sensitive internal drafts.
  2. No red‑line data may be used (see Section 3).
  3. AI agents or future automated systems may assist internal tasks as long as outputs undergo human review.

2. AI for Client Deliverables

All client deliverables are governed by the same three Core Commitments:

  1. Human‑in‑the‑Loop (HIL) — every AI‑assisted deliverable is reviewed and finalized by a qualified human before it reaches the client.
  2. Data Protection — no client data, sensitive information, or proprietary assets may enter unapproved AI systems during production.
  3. Transparency & Trust — deliverables must protect the client’s brand, meet legal standards, and uphold NWD’s reputation.

3. AI for Client Recommendations

  1. Recommend only vetted, safe, and appropriate tools.
  2. Do not advise clients to upload sensitive data to public systems.
  3. Avoid promising outcomes AI cannot guarantee.
  4. Align all advice with NWD’s ethical and operational standards.

4. Data Protection Workflow

  1. Verify if information is public or private/protected.
  2. Confirm the tool is approved.
  3. Determine whether client consent or disclosure is required.
  4. When unsure, escalate to the AI Task Force or the COO.
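As a sketch, the four checks above could be collapsed into one routine. Everything here is a hypothetical illustration: the function name, the placeholder approved-tool set, and the return strings are not NWD's real registry or tooling.

```python
from typing import Optional

# Placeholder set standing in for the official approved-tool list.
APPROVED_TOOLS = {"example-approved-tool"}

def data_protection_check(info_is_public: Optional[bool],
                          tool: str,
                          consent_required: bool,
                          consent_obtained: bool) -> str:
    # Step 4: when the data's classification is unclear, escalate, don't guess.
    if info_is_public is None:
        return "escalate to the AI Task Force or the COO"
    # Steps 1-2: private/protected data never enters an unapproved tool.
    if not info_is_public and tool not in APPROVED_TOOLS:
        return "stop: private data may not enter an unapproved tool"
    # Step 3: confirm client consent or disclosure where required.
    if consent_required and not consent_obtained:
        return "stop: obtain client consent or disclosure first"
    return "proceed"
```

Encoding the workflow this way makes the escalation default explicit: uncertainty routes to a human, never to the tool.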

5. Tool Governance Workflow

All new AI tools or use cases follow this path:
  1. Employee or manager proposes a tool.
  2. AI Task Force performs technical and risk review:
    1. Data Security: Clear assurance of non-training usage and defined data deletion policies.
    2. Compliance & IP: Verification that the tool does not claim ownership over outputs and is licensed for commercial use.
    3. Strategic Value: Evidence that the tool solves a problem, saves time, or provides a competitive advantage.
  3. The Task Force assigns an appropriate Lead to review each request and submit a brief to Leadership recommending approval or rejection.
  4. Leadership finalizes and communicates its decision to the Task Force Lead.
  5. If approved, a usage guide is created and the tool is added to the official list.
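The three review criteria in step 2 could be captured as a simple checklist record. The field names and the all-criteria-must-pass rule below are illustrative assumptions, not the Task Force's actual rubric.

```python
from dataclasses import dataclass

@dataclass
class ToolReview:
    tool_name: str
    no_training_on_inputs: bool    # Data Security: non-training assurance
    deletion_policy_defined: bool  # Data Security: defined data deletion policy
    outputs_owned_by_user: bool    # Compliance & IP: no vendor claim on outputs
    commercial_license: bool       # Compliance & IP: licensed for commercial use
    strategic_value: bool          # solves a problem, saves time, or adds advantage

    def brief(self) -> str:
        """One-line summary for the Leadership brief in step 3."""
        passed = all([self.no_training_on_inputs, self.deletion_policy_defined,
                      self.outputs_owned_by_user, self.commercial_license,
                      self.strategic_value])
        verdict = "recommend approval" if passed else "recommend rejection"
        return f"{self.tool_name}: {verdict}"
```

A checklist like this keeps the review auditable: every recommendation traces back to the same five yes/no questions.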

6. Reporting & Escalation

  1. Report unsafe, erroneous, biased, or confidential AI outputs immediately.
  2. Stop using the tool and notify your manager and the AI Task Force.
  3. Document what happened (prompt, output, tool).
