PRESENCE SIGNALING ARCHITECTURE WASHINGTON, DC

Govern the Machine.

The future requires human signal to overcome artificial noise. Frameworks for institutional operators governing AI-disrupted systems.

Independent · No vendor capture · No compliance theater

The Platform

Your institution is not failing because of a bad AI model. It is failing because of a broken governance structure.

Founded by Dr. Tuboise Floyd

Editor in Chief, The AI Governance Record · Host, The AI Governance Briefing with Dr. Tuboise Floyd · TAIMScore™ Certified Assessor

320K+ Tech professionals reached
Top 100 Apple Podcasts · Global Management
72 TAIM controls · GOVERN · MAP · MEASURE · MANAGE
12 Failure Files · scored, sourced, mapped

Canonical IP · Human Signal

The Canon.

Each framework is a standalone diagnostic or operational tool for institutional operators governing AI.

Analysis

The Trust Gap

Two levels of institutional AI governance failure. Structural absence. Structural insufficiency. Permitted is not the same as admissible.

Read the framework

Diagnostic

GASP™

Governance As a Structural Problem. Most institutions don't lack the right software. They never built the right structure.

Read the diagnostic

Thesis

The Workflow Thesis

Institutions deploying AI fail not because of underperforming models, but because of broken governance structures. The primary risk is never a bad model.

Read the thesis

Practice

Noise Discipline

The algorithm is rewriting your source code. Cognitive defense for operators drowning in vendor hype and feed-induced source amnesia.

Read the brief

Framework

L.E.A.C.™ Protocol

Four physical constraints every AI strategy must address — Lithography, Energy, Arbitrage, Cooling. Miss one and you are leaking value.

  • L Lithography
  • E Energy
  • A Arbitrage
  • C Cooling
Read the protocol

Architecture

PSA® · AIaPI™

Presence Signaling Architecture and AI as Presence Interface — frameworks for restoring human visibility in systems designed to observe, not listen.

Read the architecture

Signal Validation

Hyperprompt™

An emergent lexicon entry from the PSA® runtime. The fusion of a scored identity-coded signal and a prompt — a presence-optimized query.

Read the lexicon

Live Diagnostic · GASP™

Run the diagnostic.

5 questions. Instant GASP™ structural diagnosis. No login. Find out if your institution has the structure to govern AI accountably — right now.

Start

TAIMScore™ in action

Real incidents.
Scored.

Human Signal applies the TAIMScore™ framework to real AI failures on the podcast. 12 incidents. 12 TAIM controls. Every failure is a practice scenario.

FAILURE FILE 01 / 12

Accountability & Training

When your AI learns to hate — on company time.

Microsoft TAY · AIID #6

  • GOVERN 2.2
  • MEASURE 2.6
  • MANAGE 2.1

Microsoft spent $0 on adversarial input controls before releasing TAY. Within 16 hours it published racist propaganda. Four TAIM domains failed simultaneously.

FAILURE FILE 05 / 12

Privacy & Socio-Technical Design

OpenAI scraped the internet. Your data was in it.

OpenAI Class Action · AIID #561

  • MAP 1.6
  • MEASURE 2.10

A 157-page class action alleged ChatGPT was trained on private data without consent. Active exposure under HIPAA, TRAIGA, and the EU AI Act simultaneously.

FAILURE FILE 09 / 12

Bias, Fairness & Contextual Deployment

The algorithm said it was him. It wasn't.

Wrongful Arrests · AIID #74 · #896

  • MAP 1.2
  • MEASURE 2.11

Three Black men. Three wrongful arrests. Detroit Police acknowledged a 96% misidentification rate when facial recognition was used in isolation. Detroit settled for $300,000.

FAILURE FILE 12 / 12

Feedback Systems & Context-Appropriate Use

The condolence email that wrote itself.

Vanderbilt / ChatGPT

  • MEASURE 3.3

Vanderbilt sent a post-mass-shooting condolence email signed "Paraphrase from OpenAI's ChatGPT." No mechanism asked: must a human own these words?

April 2026 · Failure File of the Month

When your AI invents policy.

Air Canada · BC Civil Resolution Tribunal · 2024

Air Canada's chatbot told a grieving customer he could apply for a bereavement fare retroactively within 90 days. That policy didn't exist. When he submitted the claim, Air Canada denied it — and argued the chatbot was a separate legal entity not binding on the airline. The Tribunal rejected that defense and awarded the customer $812.

The precedent: you own what your AI says.

  • GOVERN 1.1
  • GOVERN 1.7
  • MANAGE 1.1
  • MANAGE 4.1
Read full analysis

Now broadcasting

The AI Governance
Briefing.

Rapid-fire episodes on AI governance, institutional risk, and finding your human value when the machine noise gets loud.

AI Governance

The Automation Paradox

Finding your human value when AI rewrites the rules. Why "leverage" is replacing judgment.

BAR Method

Break the Cycle

Background, Action, Result as a personal reinvention framework. The ultimate self-interview.

Identity

Who Are You Beneath the Noise?

Five steps to break autopilot, claim solitude, and find purpose through serving others.

Career

Market Signals, Not Just Skills

How market signals shape opportunity. Stand out by amplifying your story at the right moment.

HUMAN SIGNAL TOWN HALL · MAY 14, 2026 · 2–3 PM ET · VIRTUAL

The Strict Reality of AI Governance.

50 seats · 8 experts · No vendor scripts · One hour of unfiltered diagnosis — with your questions at the center.

SESSION STRUCTURE · 60 MIN

  • 0:00 Intro & framing — the governance reality check
  • 0:10 Panel — agentic risk, fiduciary duty, accountability debt
  • 0:35 Audience microphone — your questions, live, no filter
  • 0:55 Close + what's next

GENERAL ADMISSION

$147 / seat

  • Live virtual seat + mic access
  • On-set recording experience
  • Recording sent to all ticket holders
  • Audience mic is opt-in — never required
Reserve your seat — $147

Secure checkout via Stripe. Can't attend live? You still get the recording.

Quick reference

Common questions.

What is TAIMScore™?

TAIMScore™ — the Trusted AI Model Score — is an enterprise AI maturity and risk assessment framework developed by HISPI Project Cerebellum. It gives auditors, executives, and compliance professionals a structured methodology to score, audit, and manage an organization's AI readiness. Human Signal is an authorized affiliate partner promoting the official TAIMScore™ Assessor Workshop — virtual, hands-on, 6 CPEs.

Who is Dr. Tuboise Floyd?

Founder of Human Signal and an independent AI governance researcher. He developed the LEAC Protocol and the Noise Discipline Framework — tools for restoring human visibility and institutional signal in automated, high-noise environments. He hosts The AI Governance Briefing with Dr. Tuboise Floyd and leads the quarterly Town Hall for institutional operators.

What is the LEAC Protocol?

A macro diagnostic tool built from forensic market analysis — not a governance framework. It identifies the physical infrastructure constraints that determine which AI companies survive the infrastructure war and where value erodes.

  • L — Lithography. Control of the semiconductor supply chain (Signal: ASML).
  • E — Energy. Securing gigawatt-scale power contracts (Signal: Crusoe, Leidos).
  • A — Arbitrage. Stranded energy, flare gas, off-peak power (Signal: Lambda, CoreWeave).
  • C — Cooling. Thermodynamics is the ultimate constraint (Signal: Path Robotics, VulcanForms, Hadrian).
What is The AI Governance Briefing?

The AI Governance Briefing with Dr. Tuboise Floyd is an AI governance podcast for the Builder Class — leaders, auditors, and institutional operators navigating AI-disrupted systems. Dr. Floyd examines the physics of institutional failure, the limits of automation, and what it takes to govern the machine.

How can my organization underwrite Human Signal?

Three underwriting tiers — Signal Drop Sponsorships ($1,500 per episode), Seasonal Signal Partnerships ($6,000 per quarter), and the Signal Brief Presenting Partner package ($12,000 per quarter), which includes named sponsorship of the Quarterly Town Hall and direct introductions to Dr. Floyd's institutional network.

Visible Human Community

Take the
Visible Human
Pledge.

Failure Files™ digest, monthly session reminders, and AI governance framework updates. Join practitioners, auditors, and governance leaders who refuse to let the machine win.

Take the pledge

We do not sell your data.