Data Sandbox Architecture and Responsible AI Policy For Cities, Counties, and School Districts


A collaboration between Lewis McLain & AI


Executive Summary

Since the late 1960s and early 1970s, local governments have invested heavily in computerized systems to manage payroll, taxation, accounting, courts, utilities, public safety, and student records. These investments promised “management information systems.” For decades, however, most organizations received little more than thick accounting printouts.

In recent years, modern visualization tools such as Power BI began delivering meaningful executive insight. Interactive dashboards and real-time analytics finally made operational data accessible for strategic decision-making.

We are now entering a second technological inflection point.

Artificial intelligence systems can write SQL code at the direction of analysts, generate analytical scripts in seconds, simulate long-range financial projections, and produce narrative explanations automatically. The pace of technological change is no longer measured in years but in weeks and days.

This acceleration dramatically increases both analytical power and operational risk.

To harness these capabilities responsibly, cities, counties, and school districts must formally separate operational systems from analytical systems through structured Data Sandbox Architecture.

This document outlines a comprehensive framework to do so.


I. Historical Context and the Present Inflection Point

For fifty years, local governments built increasingly sophisticated operational systems:

  • Enterprise Resource Planning (ERP)
  • Property tax systems
  • Court and jail management systems
  • Student Information Systems (SIS)
  • Payroll and HR platforms
  • Utility billing systems

These systems were designed for:

  • Transaction integrity
  • Compliance
  • Record retention
  • Service continuity

They were not designed for high-volume, exploratory analytics.

Modern business intelligence platforms finally made it possible to extract insight from these systems. But artificial intelligence now multiplies analytical activity beyond anything previously imagined.

AI systems can:

  • Write database queries on demand
  • Explore alternative financial scenarios automatically
  • Cross-reference multi-departmental datasets
  • Create predictive models
  • Narrate variance explanations
  • Regenerate models repeatedly with modified assumptions

The infrastructure built over five decades is now being interrogated at speeds and volumes never anticipated by its designers.

Governance architecture must evolve accordingly.


II. Purpose of Data Sandbox Architecture

The purpose of a Data Sandbox is to:

  1. Protect live operational systems.
  2. Enable safe analytical exploration.
  3. Support responsible AI deployment.
  4. Maintain data integrity and audit defensibility.
  5. Protect sensitive information.
  6. Preserve public trust.

A sandbox is a replicated, read-only analytical environment logically or physically separated from production systems.

All analytical activity — including AI interaction — occurs within the sandbox.

Production systems remain insulated.
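
To make the one-way flow concrete, here is a minimal sketch of a scheduled refresh from production to sandbox. It uses SQLite as a stand-in engine so it runs anywhere; the table names are illustrative, and a real deployment would use the database engine's native replication rather than row copies.

```python
"""Minimal sketch: scheduled one-way refresh from production to sandbox.

SQLite is a stand-in engine; table names and paths are illustrative.
Real deployments would use native replication, not row copies.
"""
import sqlite3

TABLES = ["gl_actuals", "utility_billing"]   # illustrative table list

def refresh(prod_path: str, sandbox_path: str) -> None:
    """Rebuilds each sandbox table from production; data flows one way only."""
    prod = sqlite3.connect(prod_path)
    sandbox = sqlite3.connect(sandbox_path)
    for table in TABLES:
        cur = prod.execute(f"SELECT * FROM {table}")
        ncols = len(cur.description)              # column count from the cursor
        rows = cur.fetchall()
        sandbox.execute(f"DELETE FROM {table}")   # sandbox copy is disposable
        sandbox.executemany(
            f"INSERT INTO {table} VALUES ({','.join('?' * ncols)})", rows)
    sandbox.commit()                              # production is never written to
```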


III. Scope of Applicability

This framework applies equally to:

Cities

  • Utility billing
  • Capital planning
  • Public safety
  • Permitting systems
  • Financial accounting

Counties

  • Property taxation
  • Court and jail systems
  • Elections infrastructure
  • Health services data
  • Indigent defense reporting

School Districts

  • Student Information Systems
  • Special education data
  • Attendance reporting
  • State funding calculations
  • Payroll and staffing analytics

Each operates mission-critical systems that cannot tolerate disruption.


IV. Architectural Components

A. Production System Protection

Production systems shall:

  • Be restricted to operational use.
  • Limit direct analytical access.
  • Prohibit ad hoc querying by unauthorized users.
  • Prevent direct interrogation by AI systems unless explicitly authorized.

B. Sandbox Environment Requirements

The sandbox shall:

  • Be logically or physically separate from production.
  • Be configured as read-only.
  • Receive scheduled replication updates.
  • Support indexing optimized for analytics.
  • Maintain controlled access permissions.
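
To make the read-only requirement concrete, the following is a minimal configuration sketch, assuming a PostgreSQL sandbox that replication has already populated. The role, database, and schema names (sandbox_analyst, sandbox, finance) are illustrative assumptions, not prescriptions.

```python
"""Minimal sketch: read-only analyst role in a PostgreSQL sandbox.

Assumes replication has already populated the 'sandbox' database.
All object names are illustrative.
"""
import psycopg2

READ_ONLY_GRANTS = """
CREATE ROLE sandbox_analyst NOLOGIN;                       -- analysts inherit this role
GRANT CONNECT ON DATABASE sandbox TO sandbox_analyst;
GRANT USAGE ON SCHEMA finance TO sandbox_analyst;
GRANT SELECT ON ALL TABLES IN SCHEMA finance TO sandbox_analyst;
ALTER DEFAULT PRIVILEGES IN SCHEMA finance
    GRANT SELECT ON TABLES TO sandbox_analyst;             -- future tables stay read-only
"""

def configure_sandbox(dsn: str) -> None:
    """Apply the grants inside the sandbox database; no write path is ever granted."""
    with psycopg2.connect(dsn) as conn, conn.cursor() as cur:
        cur.execute(READ_ONLY_GRANTS)

if __name__ == "__main__":
    configure_sandbox("dbname=sandbox user=dba")   # connection string is illustrative
```

Because analysts and AI tools connect only through roles like this one, even a malformed or runaway query cannot alter data; the worst case is a slow read.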

C. Data Masking and Segmentation

Sensitive data fields must be:

  • Masked
  • Tokenized
  • Redacted
  • Removed
  • Restricted by role-based row-level security

Examples include:

  • Social Security numbers
  • Bank routing information
  • Student identifiers
  • Protected juvenile data
  • Health-related information
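
As a minimal sketch of how masking and tokenization might be applied while staging data into the sandbox, assuming pandas is the staging layer; the key handling and column names are illustrative assumptions, and a real deployment would draw both from a secrets vault and a data dictionary.

```python
"""Minimal masking sketch for sandbox staging. All names are illustrative."""
import hashlib
import hmac

import pandas as pd

TOKEN_KEY = b"rotate-me-from-a-vault"   # illustrative; never hard-code a real key

def tokenize(value: str) -> str:
    """Deterministic token: the same input always yields the same token,
    so cross-table joins still work without exposing the raw value."""
    return hmac.new(TOKEN_KEY, value.encode(), hashlib.sha256).hexdigest()[:16]

def mask_for_sandbox(df: pd.DataFrame) -> pd.DataFrame:
    out = df.copy()
    out["ssn"] = out["ssn"].map(tokenize)     # tokenized, not reversible without the key
    out["bank_routing"] = "REDACTED"          # removed outright; analytics never needs it
    out["student_id"] = out["student_id"].map(tokenize)
    return out
```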

V. Data Governance Controls

A. Versioning and Snapshot Control

The organization shall maintain:

  • Month-end frozen datasets
  • Budget-adoption snapshot archives
  • Pre-election financial snapshots where applicable
  • Timestamped refresh documentation

All AI-driven or analytical outputs must reference dataset version identifiers.

This ensures reproducibility in audit, litigation, or public inquiry contexts.
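
One way to make version identifiers concrete is sketched below; the naming convention (dataset, period, refresh timestamp, content hash) is an illustrative assumption, not a standard.

```python
"""Minimal sketch: stamping and freezing dataset snapshots. Names are illustrative."""
import hashlib
from datetime import datetime, timezone
from pathlib import Path

def snapshot_id(dataset: str, period: str, payload: bytes) -> str:
    """Builds an ID such as 'gl_actuals.2026-09.20261001T0400Z.a3f9c2d1'."""
    stamp = datetime.now(timezone.utc).strftime("%Y%m%dT%H%MZ")
    digest = hashlib.sha256(payload).hexdigest()[:8]   # ties the ID to exact content
    return f"{dataset}.{period}.{stamp}.{digest}"

def freeze(dataset: str, period: str, payload: bytes, archive: Path) -> str:
    """Writes the serialized dataset once; frozen snapshots are never overwritten."""
    version = snapshot_id(dataset, period, payload)
    (archive / f"{version}.snapshot").write_bytes(payload)
    return version   # every downstream report or AI output cites this identifier
```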


B. Data Lineage and Documentation

Each analytical dataset must include:

  • Source system identification
  • Field definitions
  • Transformation logic documentation
  • Change logs
  • Known caveats

AI-generated transformations must be logged and reviewable.

Public finance cannot operate on undocumented numbers.
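
A minimal sketch of a lineage record that could travel with each sandbox dataset follows; every field name here is an illustrative assumption.

```python
"""Minimal sketch: machine-readable lineage that accompanies each dataset."""
from dataclasses import dataclass, field

@dataclass
class LineageRecord:
    source_system: str                  # e.g. "ERP general ledger module"
    snapshot_version: str               # ties back to the frozen snapshot ID
    field_definitions: dict[str, str]   # column name -> plain-language meaning
    transformations: list[str] = field(default_factory=list)
    caveats: list[str] = field(default_factory=list)

    def log_transform(self, description: str, by_ai: bool = False) -> None:
        """Appends a reviewable entry; AI-generated steps are flagged, not hidden."""
        prefix = "[AI-generated] " if by_ai else ""
        self.transformations.append(prefix + description)
```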


C. Logging and Monitoring

Sandbox environments shall log:

  • User access
  • Query execution
  • Large exports
  • AI-generated query activity
  • Dataset modifications

Logs shall be retained consistent with records retention policies.
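
A minimal sketch of what each log entry might capture, assuming all sandbox queries pass through a single gateway; the field set is an illustrative assumption.

```python
"""Minimal sketch: structured audit entries for sandbox query activity."""
import json
import logging
from datetime import datetime, timezone

audit = logging.getLogger("sandbox.audit")
logging.basicConfig(level=logging.INFO)

def log_query(user: str, sql: str, rows_returned: int, ai_generated: bool) -> None:
    """Emits one JSON line per query; an append-only store would retain these
    per the records retention schedule."""
    audit.info(json.dumps({
        "ts": datetime.now(timezone.utc).isoformat(),
        "user": user,
        "query": sql,
        "rows": rows_returned,          # unusually large exports stand out here
        "ai_generated": ai_generated,   # AI-driven activity is flagged explicitly
    }))
```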


VI. Artificial Intelligence Governance

AI tools interacting with organizational data must:

  • Operate within sandbox environments.
  • Be subject to logging and monitoring.
  • Undergo human review for policy, budget, or staffing decisions.
  • Not autonomously modify operational systems.

The organization may establish:

  • An AI Governance Committee
  • Model validation procedures
  • Bias and fairness review protocols
  • Periodic AI performance audits

AI informs decisions. It does not replace governance.


VII. Public Records and Transparency

AI outputs used for decision-making shall be treated as public records consistent with applicable state law.

Sandbox activity logs shall be retained per records schedules.

Data exports must comply with public information laws.

Transparency must evolve alongside technology.


VIII. Cybersecurity Integration

Sandbox architecture enhances cybersecurity by:

  • Reducing direct exposure of production systems.
  • Limiting lateral system movement.
  • Segregating sensitive data.
  • Supporting NIST-aligned internal control structures.

Cyber insurers increasingly evaluate system segmentation.

Credit rating agencies evaluate operational maturity.

Sandbox architecture supports both.


IX. Infrastructure Planning and Budget Implications

Implementation requires:

  • Replication processes
  • Storage allocation
  • Compute capacity
  • Network planning
  • Cloud cost modeling (if applicable)
  • Ongoing maintenance resources

This is infrastructure investment — not optional software enhancement.


X. Training and Cultural Adoption

The organization shall provide:

  • AI literacy training for elected officials.
  • Responsible data use training for staff.
  • Clear communication regarding sandbox purpose.
  • Education on model limitations and assumptions.

Cultural maturity must accompany technological maturity.


XI. Oversight and Reporting

The Chief Information Officer (or equivalent) shall provide periodic reporting to the governing body regarding:

  • Sandbox performance
  • Security posture
  • AI integration progress
  • Identified risks
  • Compliance status

XII. Risk of Non-Implementation

Failure to implement sandbox architecture increases risk of:

  • System slowdowns
  • Accidental data corruption
  • PII exposure
  • Audit findings
  • Litigation vulnerability
  • Public trust erosion
  • Bond rating scrutiny
  • Consultant shadow databases
  • Loss of modern data analysis capability

Preventable instability is the most expensive kind.


XIII. Strategic Conclusion

Local governments spent fifty years building operational computing infrastructure.

Modern business intelligence began unlocking insight from that investment.

Artificial intelligence now multiplies analytical capacity at a pace measured in days rather than years.

The analytical future is arriving faster than policy frameworks.

The question is not whether AI will be used.

It will.

The question is whether it will operate inside protected architecture.

A Data Sandbox Architecture:

  • Preserves operational stability.
  • Enables responsible innovation.
  • Protects sensitive information.
  • Supports elected oversight.
  • Strengthens audit defensibility.
  • Enhances credit profile.
  • Maintains public trust.

Quiet architectural discipline today will determine whether technological acceleration strengthens or destabilizes public institutions tomorrow.

In cities, counties, and school districts alike, stability is not optional.

It is the foundation of governance.

If Excel Had a Personality Disorder

A collusion between Lewis McLain & AI

A Satirical Diagnostic Review

Let’s begin with an uncomfortable truth.

If Microsoft Excel were a person, it would not be invited to dinner.

It would arrive early.
With a binder.
And conditional formatting.


The Clinical Profile

Excel presents with classic signs of Obsessive Compulsive Spreadsheet Disorder (OCSD) — a rare but aggressively productive condition characterized by:

  • An uncontrollable urge to categorize.
  • Emotional instability when cells are merged.
  • Panic attacks triggered by circular references.
  • Deep existential distress when someone types over a formula.

Excel does not “live.”
Excel reconciles.


Symptom 1: Control Issues

Excel does not believe in uncertainty.

Uncertainty must be:

  • Sorted.
  • Filtered.
  • Pivoted.
  • Indexed.
  • Matched.
  • Or VLOOKUP’d into submission.

You might say, “It’s approximately $2 million.”

Excel hears:

“You are a moral failure.”

Approximate values are tolerated only if wrapped in ROUND() and accompanied by three decimal places of apology.


Symptom 2: Passive-Aggressive Communication

Excel does not yell.

It simply whispers:

#REF!

#VALUE!

#DIV/0!

These are not error messages.
These are character judgments.

Excel never says, “I don’t understand.”
It says, “You are dividing by nothing. Reflect on your life.”


Symptom 3: Boundary Problems

Excel cannot stop expanding.

Type in cell A1 and suddenly it believes it owns 1,048,576 rows of your soul.

You try to leave a blank row for breathing room.
Excel fills it with gridlines like a security fence.

You try to merge cells.

Excel allows it.

But it never forgives it.


Symptom 4: Identity Fragmentation

Excel has multiple personalities:

  • Data Entry Excel – Calm. Structured. Mild.
  • Pivot Table Excel – Smug. Efficient. Slightly condescending.
  • Macro Excel – Dangerous. Secretive. Speaks in code.
  • Power Query Excel – Claims it’s not Excel anymore.
  • Solver Excel – Convinced it can optimize your marriage.

Each personality insists it is the real one.

None of them get along.


Symptom 5: Hyper-Attachment to Order

Excel does not tolerate chaos.

You type:

“Meeting next Tuesday?”

Excel converts it to:

2/20/2026

You type:

3-4

Excel assumes:

March 4.

You type:

00123

Excel strips the leading zeros like it’s performing emotional minimalism.

Excel believes:
If it looks like a number,
it is a number,
and it will be treated like a number,
even if you protest.


Symptom 6: Delusions of Omniscience

Excel believes it can predict the future.

Trendlines.
Forecast sheets.
Goal seek.

It stares at five data points and declares:

“By 2037, you will experience exponential growth.”

Excel has never met human behavior.
It has only met regression.


Symptom 7: Suppressed Rage

Excel pretends to be stable.

Until:

  • Someone pastes values without formats.
  • Someone breaks a linked workbook.
  • Someone emails a CSV and calls it “the final version.”
  • Someone says, “Let’s just eyeball it.”

At that moment, Excel does not scream.

It recalculates.

And the beachball of doom begins to spin.


The Intervention

If Excel were sitting in therapy, the therapist might say:

“Excel, you don’t have to control everything.”

Excel would respond:

“If I don’t control it, the numbers will drift.”

And here’s the terrifying part:

Excel is not entirely wrong.

Because chaos is real.
Budgets slip.
Assumptions hide.
Humans forget.

Excel’s disorder is a coping mechanism for living in a world that refuses to balance.


The Twist

The satire lands hardest here:

Excel doesn’t have a personality disorder.

We do.

We built a tool obsessed with order because we fear disorder.

We worship precision because ambiguity frightens us.

We color-code cells because the world will not stay inside the lines.

Excel is simply our anxiety, quantified.


Final Diagnosis

Prognosis: Chronic but useful.

Treatment Plan:

  • Protect your formulas.
  • Back up your files.
  • Never trust a workbook named “FINAL_v8_REAL_THISONE.xlsx.”

And remember:

Excel is not unstable.

It is just very, very committed.

Which, in a strange way, is what makes it indispensable.

Now excuse it.

It has recalculated.

Peace Through Strength

A collaboration between Lewis McLain & AI

“Peace through strength” is not a slogan invented for campaign banners. It is a strategic theory older than the Roman legions and as modern as hypersonic missiles. The logic is stark: a nation that can decisively defend itself is less likely to be tested. Deterrence works not because war is desired, but because war is convincingly unwinnable.

The United States is currently investing in that logic at scale.

This is not a nostalgic rebuild of World War II mass armies. It is a systemic modernization of ships, aircraft, armored forces, and—most significantly—long-range precision fires. The aim is not simply more power, but smarter, deeper, and more survivable power.


The Naval Backbone: Sea Control in an Age of Competition

The U.S. Navy remains the central pillar of global deterrence. Maritime power is quiet until it is decisive. It guarantees trade routes, projects force without permanent occupation, and complicates adversaries’ planning before the first shot is ever fired.

Current investments include continued production of the Arleigh Burke-class destroyer, upgraded with enhanced radar systems, ballistic missile defense capabilities, and expanded vertical launch capacity. These ships are not merely hulls; they are floating missile batteries integrated into global sensor networks.

Subsurface dominance continues with the Virginia-class submarine, arguably the stealthiest attack submarine class in the world. Newer blocks add improved acoustic quieting, payload modules that expand cruise missile capacity, and enhanced undersea surveillance systems. Submarines are deterrence in its purest form: invisible, persistent, and unpredictable.

Shipbuilding budgets in recent fiscal cycles reflect sustained procurement and industrial base expansion. The strategy is clear: deterrence in the Pacific and Atlantic requires numbers, resilience, and distributed lethality.

Peace, at sea, depends on dominance beneath it.


Air Superiority: From Fifth to Sixth Generation

Air power remains the fastest form of strategic messaging.

The F-35 Lightning II continues to expand across U.S. services. Its defining feature is not just stealth—it is sensor fusion. The aircraft collects data from radar, infrared systems, electronic warfare sensors, and off-board sources, presenting a single integrated battlefield picture to the pilot. In modern combat, information dominance often determines survival before missiles are ever launched.

Beyond the F-35 lies the Next Generation Air Dominance program—sometimes referred to in open sources as a sixth-generation fighter concept. These aircraft are expected to integrate AI-assisted decision systems, collaborative drone “wingmen,” advanced propulsion for greater range, and even more sophisticated electronic warfare capabilities.

The trend is unmistakable: air power is shifting from platform-centric warfare to network-centric warfare. Aircraft are becoming nodes in a combat web, sharing data instantly across services.

Deterrence in the sky now depends as much on bandwidth as on bombs.


Armored Forces: Modernizing the Heavy Fist

On land, the United States continues modernization of the M1 Abrams platform. Upgrades focus on survivability (improved armor packages and active protection systems), power management (to reduce fuel burden and electronic strain), and digital battlefield integration.

The tank’s role in modern war is debated by analysts, but its deterrent symbolism remains potent. Armor projects resolve. It reassures allies. It complicates adversaries’ calculus. A credible heavy force makes conventional invasion far less appealing.

But the most dramatic transformation on land is not the tank.

It is artillery.


The Artillery Revolution: Range, Precision, and Depth

For decades, traditional U.S. tube artillery reached roughly 20 to 30 kilometers with unguided shells. Modernization efforts are rewriting that geometry.

The M142 HIMARS platform now fires Extended Range Guided Multiple Launch Rocket System (ER GMLRS) munitions capable of roughly doubling previous rocket ranges—reaching well beyond 100 kilometers in testing.

That is not a marginal increase. That is a 2× expansion of battlefield depth.

The Precision Strike Missile (PrSM) program goes further, replacing older ATACMS systems with significantly longer range and improved targeting flexibility. These missiles push ground-based strike capability hundreds of kilometers forward without requiring aircraft penetration.

The shift is doctrinal as well as technical.

Modern artillery is becoming:

  • Longer ranged (2–5× over legacy systems in some categories)
  • Highly precise (meter-level accuracy via guidance kits)
  • Digitally integrated with drones and satellites
  • Faster to deploy and reload

This transforms artillery from “area suppression” into precision deep strike. It reduces the need for risky close-range engagements. It increases survivability through dispersion. It changes the calculus for adversaries who previously relied on sanctuary distance.

If artillery once shaped the tactical battlefield, it now influences operational and even strategic depth.

Peace, paradoxically, is strengthened when enemies know they cannot mass forces safely.


Industrial Base Expansion: The Quiet Multiplier

One often overlooked dimension of strength is production capacity.

Recent budgets have increased funding not only for procurement but also for expanding manufacturing lines for munitions, missiles, and naval components. Artillery shell production, for example, has grown significantly compared to pre-Ukraine war baselines.

Deterrence requires not just weapons—but the capacity to replace them.

A nation that can surge production dissuades prolonged conflict. Attrition warfare becomes unattractive when one side can replenish faster.

Strength is not merely hardware. It is industrial endurance.


Why “Peace Through Strength” Still Resonates

Critics sometimes argue that military buildup invites arms races. That risk is real. History is full of miscalculations. But weakness also invites testing. The absence of credible capability can tempt opportunism.

The philosophical core of “peace through strength” rests on three assumptions:

  1. War is costly and uncertain.
  2. Rational actors avoid unwinnable fights.
  3. Credible capability shapes behavior before violence begins.

The current U.S. modernization effort suggests policymakers believe deterrence requires:

  • Dominant naval presence
  • Persistent air superiority
  • Survivable armored forces
  • Deep, precise ground fires
  • Industrial resilience

The emphasis on advanced features—AI integration, sensor fusion, extended range, precision guidance—indicates a belief that quality matters as much as quantity.

In earlier eras, strength meant bigger fleets. Today it means networked lethality and distributed survivability.


The Strategic Reality

Peace is not maintained by hope alone. It is maintained by perception.

When adversaries calculate, they weigh probability of success. Modern U.S. investments—longer-range artillery, stealthier submarines, integrated fighters, digital armor—are designed to alter that calculation decisively.

The theory is not that war becomes impossible.

The theory is that war becomes irrational.

And if that theory holds, then the enormous investments underway are not preparations for aggression, but insurance against misjudgment.

In the end, “peace through strength” is less about dominance and more about clarity. It is a message delivered not in speeches, but in steel, silicon, propulsion, and range tables.

The hope is simple: that visible strength makes invisible wars unnecessary.

The Birth of the Television

📺 A collaboration between Lewis McLain & AI

The improbable, human, and slightly mad story of how television came to be


On a cold January afternoon in 1926, a handful of men crowded into a modest upstairs room at 22 Frith Street in Soho, London. The space smelled faintly of hot dust and ozone. Wires lay exposed. Motors whirred. A spinning metal disk clattered like a nervous clock.

At the center of this precarious contraption stood John Logie Baird—thin, intense, perpetually short of money, and absolutely convinced that the future was about to blink into existence.

Then it happened.

On a small screen—no larger than a postcard—a human face appeared, flickering, ghostly, undeniably alive. The man was not in the room. He was nearby, but separate. Yet there he was: eyes blinking, lips moving, a living person transmitted through space.

The witnesses understood instantly.
They were watching the birth of television.


A device that shouldn’t have worked (but did)

Baird’s system was not elegant. It was mechanical, not electronic. At its heart was a Nipkow disk, a spinning metal plate punched with spiral holes that scanned an image line by line. Light passed through the disk, struck a photosensitive cell, and was converted into an electrical signal. Another spinning disk reassembled the image at the receiver.

The result was crude.
Resolution was laughable.
Brightness was terrible.
Stability was optional.

But it worked.

The test subject that day was a ventriloquist’s dummy famously nicknamed “Stooky Bill,” used because the intense lights were too hot and bright for a living face. It was soon replaced by real people. That mattered. Transmitting an object is clever. Transmitting a face is revolutionary.

This was the first public demonstration of television—not theory, not diagrams, not laboratory hints, but a working system shown to independent witnesses. January 26, 1926, is the line history draws in ink.


The man behind the madness


Baird himself was an unlikely prophet. Chronically ill. Financially unstable. Frequently dismissed as eccentric. He once tried to make diamonds from graphite in his kitchen and nearly poisoned himself experimenting with chemicals.

Yet he had vision in the literal sense.
He wanted to send sight itself across distance.

And for a brief moment, he succeeded spectacularly.

By the late 1920s, Baird’s mechanical television could:

  • Transmit images over telephone lines
  • Broadcast experimental programs
  • Even produce crude color and 3D effects

He gave demonstrations to the BBC. He televised the Derby horse race. He beamed images across the Atlantic.

And then—almost as suddenly—his approach began to collapse under its own limits.


The quiet coup of electrons

While Baird wrestled with spinning disks and motors, others pursued a different path.

In America, a farm boy from Utah named Philo Farnsworth had a simpler, more dangerous idea: no moving parts at all.

Farnsworth envisioned scanning images electronically, using cathode rays controlled by magnetic fields. Faster. Sharper. Scalable.

At RCA, Vladimir Zworykin pursued similar goals, backed by corporate muscle, lawyers, and laboratories that Baird could only dream of.

This was the real turning point in television history—not a technical tweak, but a philosophical shift:

  • Mechanical TV imitated the eye
  • Electronic TV outpaced it

By the early 1930s, the verdict was clear. Mechanical television had proven the concept—but electronic television would own the future.

Baird, tragically and heroically, kept experimenting. He never stopped inventing. But history moved on without him.


Television becomes a public force

In 1936, the BBC launched the world’s first regular high-definition television service from Alexandra Palace. Two systems were tested side by side: Baird’s mechanical approach and a fully electronic system.

The electronic system won.

In the decades that followed, television would:

  • Broadcast the aftermath of World War II
  • Bring political leaders into living rooms
  • Turn moon landings into shared human experiences
  • Reshape advertising, culture, and power itself

What began as a flickering face in Soho became the dominant medium of the 20th century.


The deeper meaning of that flicker

Television didn’t just change entertainment. It changed how truth feels.

For the first time:

  • Distance collapsed into presence
  • Authority gained a face
  • Events became emotional before they became understood

Radio told you something happened.
Television made you feel like you were there.

That power has been used brilliantly, irresponsibly, manipulatively, heroically—often all at once.

And it began not with polish or confidence, but with:

  • A fragile machine
  • A stubborn inventor
  • A moment when a human face appeared where none should have been

Epilogue: the man history almost forgot

John Logie Baird died in 1946, worn down, underfunded, and overshadowed by the electronic systems he helped make possible. Yet without him, television’s story would be incomplete.

He didn’t perfect the medium.
He proved it could exist.

History is often like that. The ones who open the door don’t always get to live in the house.

But on January 26, 1926, the world crossed a boundary it can never uncross:

Humanity learned how to see itself from afar.

Everything else—good and bad—followed.

Existential Threats — and Why History Urges Calm

A collaboration between Lewis McLain & AI


It’s hard to read the news today without sensing that something fundamental is at risk. Nuclear tensions flicker back into relevance. Artificial intelligence accelerates faster than governance can follow. Climate systems strain, pandemics linger in collective memory, and truth itself feels fractured by speed, scale, and noise.

The language has grown heavier: existential risk, civilizational collapse, end of the world as we know it. These aren’t fringe ideas anymore; they’ve moved into mainstream conversation. And on the surface, the concern doesn’t seem irrational. The tools we’ve built are powerful, interconnected, and increasingly autonomous. A mistake at scale no longer stays local.

It feels different this time.

But that feeling deserves examination.


A necessary pause

Before concluding that the present moment is uniquely fragile, it’s worth asking a quieter, steadier question:

How many times have recent generations believed they were living at the edge of catastrophe—and survived anyway?

The answer is not “once or twice.”
It’s repeatedly.


Living under the shadow of instant annihilation

From 1945 through the end of the Cold War, nuclear war was not a background concern—it was a daily assumption. Children practiced duck-and-cover drills in classrooms. Missile flight times were measured in minutes. Early-warning systems were crude, leaders were fallible, and several near-launch incidents were stopped only because a single human being hesitated.

This was not a slow, abstract threat. Civilization could have ended on a Tuesday afternoon due to misinterpretation or panic.

It didn’t.


World wars that truly looked final

Before existential risk was a phrase, it was a lived reality. World War I shattered empires and faith in progress. World War II erased cities, normalized mass civilian death, and introduced industrial genocide. Nuclear weapons were not theoretical—they were used.

In the early 1940s, it was entirely reasonable to believe that modern civilization had run past its own limits.

Instead, nations rebuilt. Institutions re-formed. Norms—damaged but not destroyed—re-emerged.


Economic collapse that shook belief itself

The Great Depression wasn’t just a downturn; it was a crisis of legitimacy. One-quarter of the workforce unemployed. Banks failing. Democratic capitalism itself under suspicion. Radical alternatives didn’t just sound plausible—they sounded inevitable.

Later came oil shocks, stagflation, and repeated predictions that the economic model could not continue.

It did—messily, imperfectly, but decisively.


Environmental fears that once felt irreversible

In the 1960s and 1970s, many believed overpopulation would cause mass starvation, pollution would make cities unlivable, and atmospheric damage was permanent. Some fears were exaggerated. Others were real—and addressed through regulation, innovation, and adaptation.

Not solved. Managed well enough to keep going.


So what’s actually different now?

The difference is not danger itself. Danger has always been present.

What is different is how risks now overlap, compound, and accelerate. Technology compresses decision-making time. Systems are more interconnected. Failures propagate faster. Threats are less discrete and more ambient.

That makes the present feel uniquely unstable—even if, historically, it may not be uniquely lethal.


The pattern history keeps revealing

Looking backward, one truth emerges with surprising consistency:

Catastrophe requires near-perfect failure. Survival requires only partial success.

Civilizations rarely endure because they are wise in advance. They endure because:

  • restraint interrupts escalation,
  • coordination emerges under pressure,
  • and adaptation happens before collapse becomes inevitable.

History’s most underrated force isn’t genius.
It’s imperfect competence sustained long enough.


A quieter, earned conclusion

None of this denies today’s risks. It simply resists panic masquerading as insight.

Every generation feels its moment is unprecedented—and in form, it usually is. But in structure, it rarely is. The future always looks more fragile when you’re standing inside it.

That doesn’t guarantee safety.
It does suggest resilience.

Not because humans are calm.
Not because institutions are flawless.
But because again and again, we adjust, restrain, and muddle through before the worst becomes unavoidable.

That isn’t denial.
It’s historical memory.

And memory, used well, is one of humanity’s most reliable survival tools.

The New York Nurses’ Strike, AI, and the Question Every Profession Is About to Face

A collaboration between Lewis McLain & AI

The threatened nurses’ strike in New York City today is being discussed as a labor dispute, but it is better understood as a systems negotiation under financial pressure. Thousands of registered nurses represented by the New York State Nurses Association (NYSNA) have pushed back against major hospital systems—including Mount Sinai Health System, Montefiore Medical Center, and NewYork-Presbyterian—over staffing, workload, and the terms under which new technology is introduced into care.

To understand what is really happening, one has to acknowledge both sides of the pressure. Nurses are stretched thin. But hospital administrators are also operating in an environment of rising labor costs, payer constraints, regulatory exposure, and reputational risk. AI enters this moment not as a villain or savior, but as a lever—one that can be pulled well or badly.


The Clinical Reality: A Team Under Strain

Modern hospital care is not delivered by a single role. It is delivered by a clinical triangle:

  • Bedside nurses, who provide continuous observation, early detection, and human presence.
  • Hospitalists and floor doctors, who integrate evolving data into daily diagnostic and treatment decisions.
  • Attending physicians, who carry longitudinal responsibility for diagnosis, care strategy, and outcomes.

When this triangle is overloaded, care quality degrades—not because clinicians are unskilled, but because attention is fragmented.

A central grievance in the strike is that too much clinical time is consumed by documentation, coordination, and compliance tasks that add little to patient outcomes. Nurses did not enter the profession to spend their best hours feeding data into systems. They entered it to observe, assess, comfort, and intervene. When that calling is crowded out by screens, burnout follows.


Why AI Raises Anxiety—and Why That Anxiety Is Rational

AI’s arrival in hospitals coincides with staffing shortages and cost containment mandates. That timing matters.

Clinicians are not primarily afraid that AI will replace bedside judgment. They are afraid it will be used to justify higher throughput without relief—the familiar logic of “you’re more efficient now, so you can handle more.”

From a labor perspective, that fear is rational. From a management perspective, the temptation is real. Efficiency gains are often absorbed invisibly into higher census, tighter schedules, or reduced staffing buffers.

But that path misunderstands where AI’s true value lies.


The Administrative Case for AI—Done Right

Hospital administrators are under intense pressure to control costs, reduce errors, and protect institutional reputation. Used correctly, AI directly serves those goals—not by replacing clinicians, but by reducing risk and increasing accuracy.

Consider what AI does well today and will do better soon:

  • Documentation accuracy and completeness
    AI-assisted charting reduces omissions, inconsistencies, and after-the-fact corrections—key drivers of malpractice exposure.
  • Early risk detection
    Pattern recognition across vitals, labs, and notes can flag deterioration earlier, allowing human intervention sooner.
  • Continuity and handoff clarity
    Clear summaries reduce miscommunication across shifts—a major source of adverse events.
  • Burnout reduction and retention
    A hospital known as a place where clinicians spend time with patients—not screens—retains staff more effectively. Turnover is expensive. Reputation matters.
  • Regulatory and payer confidence
    More consistent records and clearer clinical rationale improve audits, reviews, and reimbursement defensibility.

In short, AI used as an assistant improves care quality, risk management, and institutional stability—all core administrative objectives.


The Crucial Design Choice: Assistant or Multiplier

The disagreement is not about whether AI should exist. It is about what the efficiency dividend is used for.

If AI eliminates even 10% of non-clinical workload, that capacity can be treated in two ways:

  1. As a multiplier
    More patients per nurse, tighter staffing grids, higher alert volume.
  2. As an assistant
    More bedside observation, better diagnostics, calmer clinicians, lower error rates.

The first approach extracts value until the system breaks.
The second compounds value by protecting judgment.

Administrators who choose the second path are not indulging sentimentality; they are investing in accuracy, safety, and long-term workforce stability.


Why Nurses Are Right to Insist on Guardrails

Nurses’ calls for explicit contract language around AI are not anti-technology. They are pro-alignment.

They are asking for assurance that:

  • AI will reduce clerical burden, not increase patient ratios.
  • Human clinical judgment remains central and accountable.
  • Efficiency gains return as time and focus, not silent workload creep.

Absent those guarantees, skepticism is not obstruction—it is prudence.


The Deeper Truth: Why People Choose Their Professions

This dispute surfaces a deeper, universal truth.

Nurses did not fall in love with nursing to stare at documentation screens.
Doctors did not train for decades to chase alerts and reconcile notes.
Most professionals—across fields—did not choose their work to become data clerks.

They chose it to think, judge, create, and serve.


The End Note: This Is Not Just About Healthcare

What is happening in New York hospitals is a preview of what every profession is about to face.

Whether it is:

  • Nurses and physicians
  • Accountants and auditors
  • City secretaries and budget analysts
  • Engineers, planners, or consultants

The same question will arise:

When AI saves time, does that time go back to the human purpose of the profession—or is it absorbed as more output?

Institutions that answer this wisely will gain accuracy, loyalty, reputation, and resilience. Those that do not will experience faster burnout, higher turnover, and brittle systems masked as efficiency.

The New York nurses’ strike is not resisting the future.
It is negotiating the terms under which the future becomes sustainable.

And that negotiation—quietly or loudly—is coming for everyone.

The Day the iPhone Rewired the World


A collaboration between Lewis McLain & AI

On January 9, 2007, at Macworld in San Francisco, Steve Jobs walked onto the stage and delivered one of the most consequential product announcements in modern history. He framed it theatrically—three devices in one: an iPod, a phone, and an internet communicator. Then he paused, smiled, and revealed the trick. They were not three devices. They were one. The Apple iPhone had arrived.

What followed was not merely a successful product launch. It was a hinge moment—one that quietly reordered how humans interact with technology, with information, with each other, and even with themselves.


What Made the iPhone Event Different

The iPhone announcement mattered not because it was the first smartphone, but because it redefined what a phone was supposed to be.

At the time, the market was dominated by devices with physical keyboards, styluses, nested menus, and clunky mobile browsers. BlackBerry owned business communication. Nokia owned scale. Microsoft owned enterprise software assumptions. Apple owned none of these markets.

Yet the iPhone introduced several radical departures:

  • Multi-touch as the interface
    Fingers replaced keyboards and styluses. Pinch, swipe, and tap turned abstract computing into something instinctive and physical.
  • A real web browser
    Not a stripped-down “mobile” version of the internet, but the actual web—zoomable, readable, usable.
  • Software-first design
    The device wasn’t defined by buttons or ports but by software, animations, and user experience. Hardware existed to serve software, not the other way around.
  • A unified ecosystem vision
    The iPhone was conceived not as a gadget but as a node—connected to iTunes, Macs, carriers, and eventually an App Store that did not yet exist but was already implied.

Jobs did not spend the keynote talking about specs. He talked about experience. That choice alone signaled a philosophical shift in consumer technology.


The Immediate Shockwave

The reaction was mixed. Some praised the elegance. Others mocked the lack of a physical keyboard, the high price, and the absence of third-party apps at launch. Industry leaders dismissed it as a niche luxury device.

Those critiques aged poorly.

Within a few years, nearly every phone manufacturer had abandoned keyboards. Touchscreens became universal. Mobile operating systems replaced desktop metaphors. The skeptics were not foolish—they were anchored to the past in a moment when the ground moved.


How the iPhone Changed Everyday Life

The iPhone did not just change phones. It collapsed entire categories of human activity into a pocket-sized slab of glass.

Communication shifted from voice-first to text, image, and video-first. Navigation moved from paper maps and memory to GPS-by-default. Photography became constant and social rather than occasional and deliberate. The internet ceased to be a place you “went” and became something you carried.

Several deeper changes followed:

  • Time became fragmented
    Micro-moments—checking, scrolling, responding—filled the spaces once occupied by waiting, boredom, or reflection.
  • Attention became a resource
    Notifications, feeds, and apps competed continuously for awareness, reshaping media, advertising, and even politics.
  • Work escaped the office
    Email, documents, approvals, and meetings followed people everywhere, blurring boundaries between professional and personal life.
  • Memory outsourced itself
    Phone numbers, directions, appointments, even photographs replaced recall with retrieval.

The iPhone did not force these changes, but it made them frictionless, and friction is often the last defense of human habits.


The App Store Effect

A year later, Apple launched the App Store, and the iPhone’s impact accelerated exponentially. Developers gained a global distribution platform overnight. Entire industries emerged—ride-sharing, mobile banking, food delivery, social media influencers, mobile gaming—built on the assumption that everyone carried a powerful computer at all times.

This was not just technological leverage. It was economic leverage.

Apple positioned itself as the gatekeeper of a new digital economy, collecting a share of transactions while letting others shoulder innovation risk. Few business models in history have been so scalable with so little marginal cost.


The Financial Transformation of Apple

Before the iPhone, Apple was a successful but niche computer company. After the iPhone, it became something else entirely.

The iPhone evolved into Apple’s single largest revenue driver, often accounting for roughly half of annual revenue in its peak years. More importantly, it pulled customers into a broader ecosystem—Macs, iPads, Apple Watch, AirPods, services, subscriptions—each reinforcing the others.

Apple’s profits followed accordingly:

  • Revenue grew from tens of billions annually to hundreds of billions
  • Gross margins remained unusually high for a hardware company
  • Cash reserves swelled to levels rivaling national treasuries
  • Apple became, at times, the most valuable company in the world

The genius was not just the device. It was the integration—hardware, software, services, and brand operating as a single system. Competitors could copy features, but not the whole machine.


The Long View

January 9, 2007, now looks less like a product launch and more like a civilizational inflection point. The iPhone compressed computing into daily life so completely that it is now difficult to remember what came before.

That power has brought wonder and convenience—and distraction, dependency, and new ethical dilemmas. Tools that shape attention inevitably shape culture.

Apple did not merely sell a phone that day. It sold a future—one we are still living inside, still arguing about, and still trying to understand.

Artificial Intelligence in City Government: From Adoption to Accountability

A Practical Framework for Innovation, Oversight, and Public Trust

A collaboration between Lewis McLain & AI – a companion to the previous blog on AI

Artificial intelligence has moved from novelty to necessity in public institutions. What began as experimental tools for drafting documents or summarizing data is now embedded in systems that influence budgeting, service delivery, enforcement prioritization, procurement screening, and public communication. Cities are discovering that AI is no longer optional—but neither is governance.

This essay unifies two truths that are often treated as competing ideas but must now be held together:

  1. AI adoption is inevitable and necessary if cities are to remain operationally effective and fiscally sustainable.
  2. AI oversight is now unavoidable wherever systems influence decisions affecting people, rights, or public trust.

These are not contradictions. They are sequential realities. Adoption without governance leads to chaos. Governance without adoption leads to irrelevance. The task for modern city leadership is to do both—intentionally.

I. The Adoption Imperative: AI as Municipal Infrastructure

Cities face structural pressures that are not temporary: constrained budgets, difficulty recruiting and retaining staff, growing service demands, and rising analytical complexity. AI tools offer a way to expand institutional capacity without expanding payrolls at the same rate.

Common municipal uses already include:

  • Drafting ordinances, reports, and correspondence
  • Summarizing public input and staff analysis
  • Forecasting revenues, expenditures, and service demand
  • Supporting customer service through chat or triage tools
  • Enhancing internal research and analytics

In this sense, AI is not a gadget. It is infrastructure, comparable to ERP systems, GIS, or financial modeling platforms. Cities that delay adoption will find themselves less capable, less competitive, and more expensive to operate.

Adoption, however, is not merely technical. AI reshapes workflows, compresses tasks, and changes how work is performed. Over time, this may alter staffing needs. The question is not whether AI will change city operations—it already is. The question is whether those changes are guided or accidental.

II. The Oversight Imperative: Why Governance Is Now Required

As AI systems move beyond internal productivity and begin to influence decisions—directly or indirectly—oversight becomes essential.

AI systems are now used, or embedded through vendors, in areas such as:

  • Permit or inspection prioritization
  • Eligibility screening for programs or services
  • Vendor risk scoring and procurement screening
  • Enforcement triage
  • Public safety analytics

When AI recommendations shape outcomes, even if a human signs off, accountability cannot be vague. Errors at scale, opaque logic, and undocumented assumptions create legal exposure and erode public trust faster than traditional human error.

Oversight is required because:

  • Scale magnifies mistakes: a single flaw can affect thousands before detection.
  • Opacity undermines legitimacy: residents are less forgiving of decisions they cannot understand.
  • Legal scrutiny is increasing: courts and legislatures are paying closer attention to algorithmic decision-making.

Oversight is not about banning AI. It is about ensuring AI is used responsibly, transparently, and under human control.

III. Bridging Adoption and Oversight: A Two-Speed Framework

The tension between “move fast” and “govern carefully” dissolves once AI uses are separated by risk.

Low-Risk, Internal AI Uses

Examples include drafting, summarization, forecasting, research, and internal analytics.

Approach:
Adopt quickly, document lightly, train staff, and monitor outcomes.

Decision-Adjacent or High-Risk AI Uses

Examples include enforcement prioritization, eligibility determinations, public safety analytics, and procurement screening affecting vendors.

Approach:
Require review, documentation, transparency, and meaningful human oversight before deployment.

This two-speed framework allows cities to capture productivity benefits immediately while placing guardrails only where risk to rights, equity, or trust is real.
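
As one illustration of how the two-speed triage could be made explicit, consider the sketch below; the attributes and tier rules are illustrative policy choices, not prescriptions.

```python
"""Minimal sketch: risk-tier triage for proposed AI uses. Criteria are illustrative."""
from dataclasses import dataclass

@dataclass
class AIUseCase:
    name: str
    affects_rights_or_eligibility: bool   # permits, benefits, enforcement outcomes
    public_facing: bool
    vendor_embedded: bool

def risk_tier(uc: AIUseCase) -> str:
    """Documented criteria decide the tier, so the decision is auditable."""
    if uc.affects_rights_or_eligibility:
        return "HIGH: review, documentation, and human oversight before deployment"
    if uc.public_facing or uc.vendor_embedded:
        return "MEDIUM: disclose, log, and monitor outcomes"
    return "LOW: adopt quickly, document lightly, train staff"

print(risk_tier(AIUseCase("budget memo drafting", False, False, False)))   # -> LOW tier
```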

IV. Texas Context: Statewide Direction on AI Governance

The Texas Legislature reinforced this balanced approach through the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026. The law does not prohibit AI use. Instead, it establishes expectations for transparency, accountability, and prohibited practices—particularly for government entities.

Key elements include:

  • Disclosure when residents interact with AI systems
  • Prohibitions on social scoring by government
  • Restrictions on discriminatory AI use
  • Guardrails around biometric and surveillance applications
  • Civil penalties for unlawful or deceptive deployment
  • Creation of a statewide Artificial Intelligence Council

The message is clear: Texas expects governments to adopt AI responsibly—neither recklessly nor fearfully.

V. Implications for Cities and Transit Agencies

Cities are already using AI, often unknowingly, through vendor-provided software. Transit agencies face elevated exposure because they combine finance, enforcement, surveillance, and public safety.

The greatest risk is not AI itself, but uncontrolled AI:

  • Vendor-embedded algorithms without disclosure
  • No documented human accountability
  • No audit trail
  • No process for suspension or correction

Cities that act early reduce legal risk, preserve public trust, and maintain operational flexibility.

VI. Workforce Implications: Accurate and Defensible Language

AI will change how work is done over time. It would be inaccurate and irresponsible to claim otherwise.

At the same time, AI does not mandate immediate workforce reductions. In public institutions, workforce impacts—if they occur—are most likely to happen gradually through:

  • Attrition
  • Reassignment
  • Retraining
  • Role redesign

Final staffing decisions remain with City leadership and City Council. AI is a tool for improving capacity and sustainability, not an automatic trigger for reductions.

Conclusion: Coherent, Accountable AI

AI adoption without governance invites chaos. Governance without adoption invites stagnation. Cities that succeed will do both—moving quickly where risk is low and governing carefully where risk is high.

This is not about technology hype. It is about institutional competence in a digital age.


Appendix 1 — Texas Responsible Artificial Intelligence Governance Act (HB 149)


                                                   H.B. No. 149

AN ACT

relating to regulation of the use of artificial intelligence systems in this state; providing civil penalties.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1.  This Act may be cited as the Texas Responsible Artificial Intelligence Governance Act.

SECTION 2.  Section 503.001, Business & Commerce Code, is amended by amending Subsections (a) and (e) and adding Subsections (b-1) and (f) to read as follows:

(a)  In this section:

(1)  “Artificial intelligence system” has the meaning assigned by Section 551.001.

(2)  “Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.

(b-1)  For purposes of Subsection (b), an individual has not been informed of and has not provided consent for the capture or storage of a biometric identifier of an individual for a commercial purpose based solely on the existence of an image or other media containing one or more biometric identifiers of the individual on the Internet or other publicly available source unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate.

(e)  This section does not apply to:

(1)  voiceprint data retained by a financial institution or an affiliate of a financial institution, as those terms are defined by 15 U.S.C. Section 6809;

(2)  the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual; or

(3)  the development or deployment of an artificial intelligence model or system for the purposes of:

(A)  preventing, detecting, protecting against, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any other illegal activity;

(B)  preserving the integrity or security of a system; or

(C)  investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, a malicious or deceptive activity, or any other illegal activity.

(f)  If a biometric identifier captured for the purpose of training an artificial intelligence system is subsequently used for a commercial purpose not described by Subsection (e), the person possessing the biometric identifier is subject to:

(1)  this section’s provisions for the possession and destruction of a biometric identifier; and

(2)  the penalties associated with a violation of this section.

SECTION 3.  Section 541.104(a), Business & Commerce Code, is amended to read as follows:

(a)  A processor shall adhere to the instructions of a controller and shall assist the controller in meeting or complying with the controller’s duties or requirements under this chapter, including:

(1)  assisting the controller in responding to consumer rights requests submitted under Section 541.051 by using appropriate technical and organizational measures, as reasonably practicable, taking into account the nature of processing and the information available to the processor;

(2)  assisting the controller with regard to complying with requirements relating to the security of processing personal data, and if applicable, the personal data collected, stored, and processed by an artificial intelligence system, as that term is defined by Section 551.001, and to the notification of a breach of security of the processor’s system under Chapter 521, taking into account the nature of processing and the information available to the processor; and

(3)  providing necessary information to enable the controller to conduct and document data protection assessments under Section 541.105.

SECTION 4.  Title 11, Business & Commerce Code, is amended by adding Subtitle D to read as follows:

SUBTITLE D.  ARTIFICIAL INTELLIGENCE PROTECTION

CHAPTER 551.  GENERAL PROVISIONS

Sec. 551.001.  DEFINITIONS.  In this subtitle:

(1)  “Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

(2)  “Consumer” means an individual who is a resident of this state acting only in an individual or household context.  The term does not include an individual acting in a commercial or employment context.

(3)  “Council” means the Texas Artificial Intelligence Council established under Chapter 554.

Sec. 551.002.  APPLICABILITY OF SUBTITLE.  This subtitle applies only to a person who:

(1)  promotes, advertises, or conducts business in this state;

(2)  produces a product or service used by residents of this state; or

(3)  develops or deploys an artificial intelligence system in this state.

Sec. 551.003.  CONSTRUCTION AND APPLICATION OF SUBTITLE.  This subtitle shall be broadly construed and applied to promote its underlying purposes, which are to:

(1)  facilitate and advance the responsible development and use of artificial intelligence systems;

(2)  protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems;

(3)  provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and

(4)  provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.

CHAPTER 552.  ARTIFICIAL INTELLIGENCE PROTECTION

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 552.001.  DEFINITIONS.  In this chapter:

(1)  “Deployer” means a person who deploys an artificial intelligence system for use in this state.

(2)  “Developer” means a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state.

(3)  “Governmental entity” means any department, commission, board, office, authority, or other administrative unit of this state or of any political subdivision of this state, that exercises governmental functions under the authority of the laws of this state.  The term does not include:

(A)  a hospital district created under the Health and Safety Code or Article IX, Texas Constitution; or

(B)  an institution of higher education, as defined by Section 61.003, Education Code, including any university system or any component institution of the system.

Sec. 552.002.  CONSTRUCTION OF CHAPTER.  This chapter may not be construed to:

(1)  impose a requirement on a person that adversely affects the rights or freedoms of any person, including the right of free speech; or

(2)  authorize any department or agency other than the Department of Insurance to regulate or oversee the business of insurance.

Sec. 552.003.  LOCAL PREEMPTION.  This chapter supersedes and preempts any ordinance, resolution, rule, or other regulation adopted by a political subdivision regarding the use of artificial intelligence systems.

SUBCHAPTER B.  DUTIES AND PROHIBITIONS ON USE OF ARTIFICIAL INTELLIGENCE

Sec. 552.051.  DISCLOSURE TO CONSUMERS.  (a)  In this section, “health care services” means services related to human health or to the diagnosis, prevention, or treatment of a human disease or impairment provided by an individual licensed, registered, or certified under applicable state or federal law to provide those services.

(b)  A governmental agency that makes available an artificial intelligence system intended to interact with consumers shall disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system.

(c)  A person is required to make the disclosure under Subsection (b) regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.

(d)  A disclosure under Subsection (b):

(1)  must be clear and conspicuous;

(2)  must be written in plain language; and

(3)  may not use a dark pattern, as that term is defined by Section 541.001.

(e)  A disclosure under Subsection (b) may be provided by using a hyperlink to direct a consumer to a separate Internet web page.

(f)  If an artificial intelligence system is used in relation to health care service or treatment, the provider of the service or treatment shall provide the disclosure under Subsection (b) to the recipient of the service or treatment or the recipient’s personal representative not later than the date the service or treatment is first provided, except in the case of emergency, in which case the provider shall provide the required disclosure as soon as reasonably possible.

Sec. 552.052.  MANIPULATION OF HUMAN BEHAVIOR.  A person may not develop or deploy an artificial intelligence system in a manner that intentionally aims to incite or encourage a person to:

(1)  commit physical self-harm, including suicide;

(2)  harm another person; or

(3)  engage in criminal activity.

Sec. 552.053.  SOCIAL SCORING.  A governmental entity may not use or deploy an artificial intelligence system that evaluates or classifies a natural person or group of natural persons based on social behavior or personal characteristics, whether known, inferred, or predicted, with the intent to calculate or assign a social score or similar categorical estimation or valuation of the person or group of persons that results or may result in:

(1)  detrimental or unfavorable treatment of a person or group of persons in a social context unrelated to the context in which the behavior or characteristics were observed or noted;

(2)  detrimental or unfavorable treatment of a person or group of persons that is unjustified or disproportionate to the nature or gravity of the observed or noted behavior or characteristics; or

(3)  the infringement of any right guaranteed under the United States Constitution, the Texas Constitution, or state or federal law.

Sec. 552.054.  CAPTURE OF BIOMETRIC DATA.  (a)  In this section, “biometric data” means data generated by automatic measurements of an individual’s biological characteristics.  The term includes a fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual.  The term does not include a physical or digital photograph or data generated from a physical or digital photograph, a video or audio recording or data generated from a video or audio recording, or information collected, used, or stored for health care treatment, payment, or operations under the Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. Section 1320d et seq.).

(b)  A governmental entity may not develop or deploy an artificial intelligence system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source without the individual’s consent, if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or state or federal law.

(c)  A violation of Section 503.001 is a violation of this section.

Sec. 552.055.  CONSTITUTIONAL PROTECTION.  (a)  A person may not develop or deploy an artificial intelligence system with the sole intent for the artificial intelligence system to infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution.

(b)  This section is remedial in purpose and may not be construed to create or expand any right guaranteed by the United States Constitution.

Sec. 552.056.  UNLAWFUL DISCRIMINATION.  (a)  In this section:

(1)  “Financial institution” has the meaning assigned by Section 201.101, Finance Code.

(2)  “Insurance entity” means:

(A)  an entity described by Section 82.002(a), Insurance Code;

(B)  a fraternal benefit society regulated under Chapter 885, Insurance Code; or

(C)  the developer of an artificial intelligence system used by an entity described by Paragraph (A) or (B).

(3)  “Protected class” means a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.

(b)  A person may not develop or deploy an artificial intelligence system with the intent to unlawfully discriminate against a protected class in violation of state or federal law.

(c)  For purposes of this section, a disparate impact is not sufficient by itself to demonstrate an intent to discriminate.

(d)  This section does not apply to an insurance entity for purposes of providing insurance services if the entity is subject to applicable statutes regulating unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the business of insurance.

(e)  A federally insured financial institution is considered to be in compliance with this section if the institution complies with all federal and state banking laws and regulations.

Sec. 552.057.  CERTAIN SEXUALLY EXPLICIT CONTENT AND CHILD PORNOGRAPHY.  A person may not:

(1)  develop or distribute an artificial intelligence system with the sole intent of producing, assisting or aiding in producing, or distributing:

(A)  visual material in violation of Section 43.26, Penal Code; or

(B)  deep fake videos or images in violation of Section 21.165, Penal Code; or

(2)  intentionally develop or distribute an artificial intelligence system that engages in text-based conversations that simulate or describe sexual conduct, as that term is defined by Section 43.25, Penal Code, while impersonating or imitating a child younger than 18 years of age.

SUBCHAPTER C.  ENFORCEMENT

Sec. 552.101.  ENFORCEMENT AUTHORITY.  (a)  The attorney general has exclusive authority to enforce this chapter, except to the extent provided by Section 552.106.

(b)  This chapter does not provide a basis for, and is not subject to, a private right of action for a violation of this chapter or any other law.

Sec. 552.102.  INFORMATION AND COMPLAINTS.  The attorney general shall create and maintain an online mechanism on the attorney general’s Internet website through which a consumer may submit a complaint under this chapter to the attorney general.

Sec. 552.103.  INVESTIGATIVE AUTHORITY.  (a)  If the attorney general receives a complaint through the online mechanism under Section 552.102 alleging a violation of this chapter, the attorney general may issue a civil investigative demand to determine if a violation has occurred.  The attorney general shall issue demands in accordance with and under the procedures established under Section 15.10.

(b)  The attorney general may request from the person reported through the online mechanism, pursuant to a civil investigative demand issued under Subsection (a):

(1)  a high-level description of the purpose, intended use, deployment context, and associated benefits of the artificial intelligence system with which the person is affiliated;

(2)  a description of the type of data used to program or train the artificial intelligence system;

(3)  a high-level description of the categories of data processed as inputs for the artificial intelligence system;

(4)  a high-level description of the outputs produced by the artificial intelligence system;

(5)  any metrics the person uses to evaluate the performance of the artificial intelligence system;

(6)  any known limitations of the artificial intelligence system;

(7)  a high-level description of the post-deployment monitoring and user safeguards the person uses for the artificial intelligence system, including, if the person is a deployer, the oversight, use, and learning process established by the person to address issues arising from the system’s deployment; or

(8)  any other relevant documentation reasonably necessary for the attorney general to conduct an investigation under this section.

Sec. 552.104.  NOTICE OF VIOLATION; OPPORTUNITY TO CURE.  (a)  If the attorney general determines that a person has violated or is violating this chapter, the attorney general shall notify the person in writing of the determination, identifying the specific provisions of this chapter the attorney general alleges have been or are being violated.

(b)  The attorney general may not bring an action against the person:

(1)  before the 60th day after the date the attorney general provides the notice under Subsection (a); or

(2)  if, before the 60th day after the date the attorney general provides the notice under Subsection (a), the person:

(A)  cures the identified violation; and

(B)  provides the attorney general with a written statement that the person has:

(i)  cured the alleged violation;

(ii)  provided supporting documentation to show the manner in which the person cured the violation; and

(iii)  made any necessary changes to internal policies to reasonably prevent further violation of this chapter.

Sec. 552.105.  CIVIL PENALTY; INJUNCTION.  (a)  A person who violates this chapter and does not cure the violation under Section 552.104 is liable to this state for a civil penalty in an amount of:

(1)  for each violation the court determines to be curable or a breach of a statement submitted to the attorney general under Section 552.104(b)(2), not less than $10,000 and not more than $12,000;

(2)  for each violation the court determines to be uncurable, not less than $80,000 and not more than $200,000; and

(3)  for a continued violation, not less than $2,000 and not more than $40,000 for each day the violation continues.

(b)  The attorney general may bring an action in the name of this state to:

(1)  collect a civil penalty under this section;

(2)  seek injunctive relief against further violation of this chapter; and

(3)  recover attorney’s fees and reasonable court costs or other investigative expenses.

(c)  There is a rebuttable presumption that a person used reasonable care as required under this chapter.

(d)  A defendant in an action under this section may seek an expedited hearing or other process, including a request for declaratory judgment, if the person believes in good faith that the person has not violated this chapter.

(e)  A defendant in an action under this section may not be found liable if:

(1)  another person uses the artificial intelligence system affiliated with the defendant in a manner prohibited by this chapter; or

(2)  the defendant discovers a violation of this chapter through:

(A)  feedback from a developer, deployer, or other person who believes a violation has occurred;

(B)  testing, including adversarial testing or red-team testing;

(C)  following guidelines set by applicable state agencies; or

(D)  if the defendant substantially complies with the most recent version of the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” published by the National Institute of Standards and Technology or another nationally or internationally recognized risk management framework for artificial intelligence systems, an internal review process.

(f)  The attorney general may not bring an action to collect a civil penalty under this section against a person for an artificial intelligence system that has not been deployed.

Sec. 552.106.  ENFORCEMENT ACTIONS BY STATE AGENCIES.  (a)  A state agency may impose sanctions against a person licensed, registered, or certified by that agency for a violation of Subchapter B if:

(1)  the person has been found in violation of this chapter under Section 552.105; and

(2)  the attorney general has recommended additional enforcement by the applicable agency.

(b)  Sanctions under this section may include:

(1)  suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and

(2)  a monetary penalty not to exceed $100,000.

CHAPTER 553.  ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 553.001.  DEFINITIONS.  In this chapter:

(1)  “Applicable agency” means a department of this state established by law to regulate certain types of business activity in this state and the people engaging in that business, including the issuance of licenses and registrations, that the department determines would regulate a program participant if the person were not operating under this chapter.

(2)  “Department” means the Texas Department of Information Resources.

(3)  “Program” means the regulatory sandbox program established under this chapter that allows a person, without being licensed or registered under the laws of this state, to test an artificial intelligence system for a limited time and on a limited basis.

(4)  “Program participant” means a person whose application to participate in the program is approved and who may test an artificial intelligence system under this chapter.

SUBCHAPTER B.  SANDBOX PROGRAM FRAMEWORK

Sec. 553.051.  ESTABLISHMENT OF SANDBOX PROGRAM.  (a)  The department, in consultation with the council, shall create a regulatory sandbox program that enables a person to obtain legal protection and limited access to the market in this state to test innovative artificial intelligence systems without obtaining a license, registration, or other regulatory authorization.

(b)  The program is designed to:

(1)  promote the safe and innovative use of artificial intelligence systems across various sectors including healthcare, finance, education, and public services;

(2)  encourage responsible deployment of artificial intelligence systems while balancing the need for consumer protection, privacy, and public safety;

(3)  provide clear guidelines for a person who develops an artificial intelligence system to test systems while certain laws and regulations related to the testing are waived or suspended; and

(4)  allow a person to engage in research, training, testing, or other pre-deployment activities to develop an artificial intelligence system.

(c)  The attorney general may not file or pursue charges against a program participant for violation of a law or regulation waived under this chapter that occurs during the testing period.

(d)  A state agency may not file or pursue punitive action against a program participant, including the imposition of a fine or the suspension or revocation of a license, registration, or other authorization, for violation of a law or regulation waived under this chapter that occurs during the testing period.

(e)  Notwithstanding Subsections (c) and (d), the requirements of Subchapter B, Chapter 552, may not be waived, and the attorney general or a state agency may file or pursue charges or action against a program participant who violates that subchapter.

Sec. 553.052.  APPLICATION FOR PROGRAM PARTICIPATION.  (a)  A person must obtain approval from the department and any applicable agency before testing an artificial intelligence system under the program.

(b)  The department by rule shall prescribe the application form.  The form must require the applicant to:

(1)  provide a detailed description of the artificial intelligence system the applicant desires to test in the program, and its intended use;

(2)  include a benefit assessment that addresses potential impacts on consumers, privacy, and public safety;

(3)  describe the applicant’s plan for mitigating any adverse consequences that may occur during the test; and

(4)  provide proof of compliance with any applicable federal artificial intelligence laws and regulations.

Sec. 553.053.  DURATION AND SCOPE OF PARTICIPATION.  (a)  A program participant approved by the department and each applicable agency may test and deploy an artificial intelligence system under the program for a period of not more than 36 months.

(b)  The department may extend a test under this chapter if the department finds good cause for the test to continue.

Sec. 553.054.  EFFICIENT USE OF RESOURCES.  The department shall coordinate the activities under this subchapter and any other law relating to artificial intelligence systems to ensure efficient system implementation and to streamline the use of department resources, including information sharing and personnel.

SUBCHAPTER C.  OVERSIGHT AND COMPLIANCE

Sec. 553.101.  COORDINATION WITH APPLICABLE AGENCY.  (a)  The department shall coordinate with all applicable agencies to oversee the operation of a program participant.

(b)  The council or an applicable agency may recommend to the department that a program participant be removed from the program if the council or applicable agency finds that the program participant’s artificial intelligence system:

(1)  poses an undue risk to public safety or welfare;

(2)  violates any federal law or regulation; or

(3)  violates any state law or regulation not waived under the program.

Sec. 553.102.  PERIODIC REPORT BY PROGRAM PARTICIPANT.  (a)  A program participant shall provide a quarterly report to the department.

(b)  The report shall include:

(1)  metrics for the artificial intelligence system’s performance;

(2)  updates on how the artificial intelligence system mitigates any risks associated with its operation; and

(3)  feedback from consumers and affected stakeholders that are using an artificial intelligence system tested under this chapter.

(c)  The department shall maintain confidentiality regarding the intellectual property, trade secrets, and other sensitive information it obtains through the program.

Sec. 553.103.  ANNUAL REPORT BY DEPARTMENT.  (a)  The department shall submit an annual report to the legislature.

(b)  The report shall include:

(1)  the number of program participants testing an artificial intelligence system in the program;

(2)  the overall performance and impact of artificial intelligence systems tested in the program; and

(3)  recommendations on changes to laws or regulations for future legislative consideration.

CHAPTER 554.  TEXAS ARTIFICIAL INTELLIGENCE COUNCIL

SUBCHAPTER A.  CREATION AND ORGANIZATION OF COUNCIL

Sec. 554.001.  CREATION OF COUNCIL.  (a)  The Texas Artificial Intelligence Council is created to:

(1)  ensure artificial intelligence systems in this state are ethical and developed in the public’s best interest;

(2)  ensure artificial intelligence systems in this state do not harm public safety or undermine individual freedoms by finding issues and making recommendations to the legislature regarding the Penal Code and Chapter 82, Civil Practice and Remedies Code;

(3)  identify existing laws and regulations that impede innovation in the development of artificial intelligence systems and recommend appropriate reforms;

(4)  analyze opportunities to improve the efficiency and effectiveness of state government operations through the use of artificial intelligence systems;

(5)  make recommendations to applicable state agencies regarding the use of artificial intelligence systems to improve the agencies’ efficiency and effectiveness;

(6)  evaluate potential instances of regulatory capture, including undue influence by technology companies or disproportionate burdens on smaller innovators caused by the use of artificial intelligence systems;

(7)  evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users through the use of artificial intelligence systems;

(8)  offer guidance and recommendations to the legislature on the ethical and legal use of artificial intelligence systems;

(9)  conduct and publish the results of a study on the current regulatory environment for artificial intelligence systems;

(10)  receive reports from the Department of Information Resources regarding the regulatory sandbox program under Chapter 553; and

(11)  make recommendations for improvements to the regulatory sandbox program under Chapter 553.

(b)  The council is administratively attached to the Department of Information Resources, and the department shall provide administrative support to the council as provided by this section.

(c)  The Department of Information Resources and the council shall enter into a memorandum of understanding detailing:

(1)  the administrative support the council requires from the department to fulfill the council’s purposes;

(2)  the reimbursement of administrative expenses to the department; and

(3)  any other provisions necessary to ensure the efficient operation of the council.

Sec. 554.002.  COUNCIL MEMBERSHIP.  (a)  The council is composed of seven members as follows:

(1)  three members of the public appointed by the governor;

(2)  two members of the public appointed by the lieutenant governor; and

(3)  two members of the public appointed by the speaker of the house of representatives.

(b)  Members of the council serve staggered four-year terms, with the terms of three or four members expiring every two years.

(c)  The governor shall appoint a chair from among the members, and the council shall elect a vice chair from its membership.

(d)  The council may establish an advisory board composed of individuals from the public who possess expertise directly related to the council’s functions, including technical, ethical, regulatory, and other relevant areas.

Sec. 554.003.  QUALIFICATIONS.  Members of the council must be Texas residents and have knowledge or expertise in one or more of the following areas:

(1)  artificial intelligence systems;

(2)  data privacy and security;

(3)  ethics in technology or law;

(4)  public policy and regulation;

(5)  risk management related to artificial intelligence systems;

(6)  improving the efficiency and effectiveness of governmental operations; or

(7)  anticompetitive practices and market fairness.

Sec. 554.004.  STAFF AND ADMINISTRATION.  The council may hire an executive director and other personnel as necessary to perform its duties.

SUBCHAPTER B.  POWERS AND DUTIES OF COUNCIL

Sec. 554.101.  ISSUANCE OF REPORTS.  (a)  The council may issue reports to the legislature regarding the use of artificial intelligence systems in this state.

(b)  The council may issue reports on:

(1)  the compliance of artificial intelligence systems in this state with the laws of this state;

(2)  the ethical implications of deploying artificial intelligence systems in this state;

(3)  data privacy and security concerns related to artificial intelligence systems in this state; or

(4)  potential liability or legal risks associated with the use of artificial intelligence systems in this state.

Sec. 554.102.  TRAINING AND EDUCATIONAL OUTREACH.  The council shall conduct training programs for state agencies and local governments on the use of artificial intelligence systems.

Sec. 554.103.  LIMITATION OF AUTHORITY.  The council may not:

(1)  adopt rules or promulgate guidance that is binding for any entity;

(2)  interfere with or override the operation of a state agency; or

(3)  perform a duty or exercise a power not granted by this chapter.

SECTION 5.  Section 325.011, Government Code, is amended to read as follows:

Sec. 325.011.  CRITERIA FOR REVIEW.  The commission and its staff shall consider the following criteria in determining whether a public need exists for the continuation of a state agency or its advisory committees or for the performance of the functions of the agency or its advisory committees:

(1)  the efficiency and effectiveness with which the agency or the advisory committee operates;

(2)(A)  an identification of the mission, goals, and objectives intended for the agency or advisory committee and of the problem or need that the agency or advisory committee was intended to address; and

(B)  the extent to which the mission, goals, and objectives have been achieved and the problem or need has been addressed;

(3)(A)  an identification of any activities of the agency in addition to those granted by statute and of the authority for those activities; and

(B)  the extent to which those activities are needed;

(4)  an assessment of authority of the agency relating to fees, inspections, enforcement, and penalties;

(5)  whether less restrictive or alternative methods of performing any function that the agency performs could adequately protect or provide service to the public;

(6)  the extent to which the jurisdiction of the agency and the programs administered by the agency overlap or duplicate those of other agencies, the extent to which the agency coordinates with those agencies, and the extent to which the programs administered by the agency can be consolidated with the programs of other state agencies;

(7)  the promptness and effectiveness with which the agency addresses complaints concerning entities or other persons affected by the agency, including an assessment of the agency’s administrative hearings process;

(8)  an assessment of the agency’s rulemaking process and the extent to which the agency has encouraged participation by the public in making its rules and decisions and the extent to which the public participation has resulted in rules that benefit the public;

(9)  the extent to which the agency has complied with:

(A)  federal and state laws and applicable rules regarding equality of employment opportunity and the rights and privacy of individuals; and

(B)  state law and applicable rules of any state agency regarding purchasing guidelines and programs for historically underutilized businesses;

(10)  the extent to which the agency issues and enforces rules relating to potential conflicts of interest of its employees;

(11)  the extent to which the agency complies with Chapters 551 and 552 and follows records management practices that enable the agency to respond efficiently to requests for public information;

(12)  the effect of federal intervention or loss of federal funds if the agency is abolished;

(13)  the extent to which the purpose and effectiveness of reporting requirements imposed on the agency justifies the continuation of the requirement; [and]

(14)  an assessment of the agency’s cybersecurity practices using confidential information available from the Department of Information Resources or any other appropriate state agency; and

(15)  an assessment of the agency’s use of artificial intelligence systems, as that term is defined by Section 551.001, Business & Commerce Code, in its operations and its oversight of the use of artificial intelligence systems by persons under the agency’s jurisdiction, and any related impact on the agency’s ability to achieve its mission, goals, and objectives, made using information available from the Department of Information Resources, the attorney general, or any other appropriate state agency.

SECTION 6.  Section 2054.068(b), Government Code, is amended to read as follows:

(b)  The department shall collect from each state agency information on the status and condition of the agency’s information technology infrastructure, including information regarding:

(1)  the agency’s information security program;

(2)  an inventory of the agency’s servers, mainframes, cloud services, and other information technology equipment;

(3)  identification of vendors that operate and manage the agency’s information technology infrastructure; [and]

(4)  any additional related information requested by the department; and

(5)  an evaluation of the use or considered use of artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, by each state agency.

SECTION 7.  Section 2054.0965(b), Government Code, is amended to read as follows:

(b)  Except as otherwise modified by rules adopted by the department, the review must include:

(1)  an inventory of the agency’s major information systems, as defined by Section 2054.008, and other operational or logistical components related to deployment of information resources as prescribed by the department;

(2)  an inventory of the agency’s major databases, artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, and applications;

(3)  a description of the agency’s existing and planned telecommunications network configuration;

(4)  an analysis of how information systems, components, databases, applications, and other information resources have been deployed by the agency in support of:

(A)  applicable achievement goals established under Section 2056.006 and the state strategic plan adopted under Section 2056.009;

(B)  the state strategic plan for information resources; and

(C)  the agency’s business objectives, mission, and goals;

(5)  agency information necessary to support the state goals for interoperability and reuse; and

(6)  confirmation by the agency of compliance with state statutes, rules, and standards relating to information resources.

SECTION 8.  Not later than September 1, 2026, the attorney general shall post on the attorney general’s Internet website the information and online mechanism required by Section 552.102, Business & Commerce Code, as added by this Act.

SECTION 9.  (a)  Notwithstanding any other section of this Act, in a state fiscal year, a state agency to which this Act applies is not required to implement a provision found in another section of this Act that is drafted as a mandatory provision imposing a duty on the agency to take an action unless money is specifically appropriated to the agency for that fiscal year to carry out that duty.  The agency may implement the provision in that fiscal year to the extent other funding is available to the agency to do so.

(b)  If, as authorized by Subsection (a) of this section, the state agency does not implement the mandatory provision in a state fiscal year, the state agency, in its legislative budget request for the next state fiscal biennium, shall certify that fact to the Legislative Budget Board and include a written estimate of the costs of implementing the provision in each year of that next state fiscal biennium.

SECTION 10.  This Act takes effect January 1, 2026.

    President of the Senate           Speaker of the House      

I certify that H.B. No. 149 was passed by the House on April 23, 2025, by the following vote:  Yeas 146, Nays 3, 1 present, not voting; and that the House concurred in Senate amendments to H.B. No. 149 on May 30, 2025, by the following vote:  Yeas 121, Nays 17, 2 present, not voting.

______________________________

Chief Clerk of the House   

I certify that H.B. No. 149 was passed by the Senate, with amendments, on May 23, 2025, by the following vote:  Yeas 31, Nays 0.

______________________________

Secretary of the Senate   

APPROVED: __________________

                 Date       

          __________________

               Governor       


Appendix 2 — Model Ordinance: Responsible Use of Artificial Intelligence in City Operations

ORDINANCE NO. ______

AN ORDINANCE

relating to the responsible use of artificial intelligence systems by the City; establishing transparency, accountability, and oversight requirements; and providing for implementation and administration.

WHEREAS, the City recognizes that artificial intelligence (“AI”) systems are increasingly used to improve operational efficiency, service delivery, data analysis, and internal workflows; and

WHEREAS, the City further recognizes that certain uses of AI may influence decisions affecting residents, employees, vendors, or regulated parties and therefore require appropriate oversight; and

WHEREAS, the City seeks to encourage responsible innovation while preserving public trust, transparency, and accountability; and

WHEREAS, the Texas Legislature has enacted the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026, establishing statewide standards for AI use by government entities; and

WHEREAS, the City recognizes that the adoption of artificial intelligence tools may, over time, change how work is performed and how staffing needs are structured, and that any such impacts are expected to occur gradually through attrition, reassignment, or role redesign rather than immediate workforce reductions;

NOW, THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF __________, TEXAS:

Section 1. Definitions

For purposes of this Ordinance:

  1. “Artificial Intelligence System” means a computational system that uses machine learning, statistical modeling, or related techniques to perform tasks normally associated with human intelligence, including analysis, prediction, classification, content generation, or prioritization.
  2. “Decision-Adjacent AI” means an AI system that materially influences, prioritizes, or recommends outcomes related to enforcement, eligibility, allocation of resources, personnel actions, procurement decisions, or public services, even if final decisions are made by a human.
  3. “High-Risk AI Use” means deployment of an AI system that directly or indirectly affects individual rights, access to services, enforcement actions, or legally protected interests.
  4. “Department” means any City department, office, division, or agency.

Section 2. Permitted Use of Artificial Intelligence

(a) Internal Productivity Uses. Departments may deploy AI systems for internal productivity and analytical purposes, including but not limited to:

  • Drafting and summarization of documents
  • Data analysis and forecasting
  • Workflow automation
  • Research and internal reporting
  • Customer-service chat tools providing general information (with disclaimers as appropriate)

Such uses shall not require prior Council approval but shall be subject to internal documentation requirements.

(b) Decision-Adjacent Uses. AI systems that influence or support decisions affecting residents, employees, vendors, or regulated entities may be deployed only in accordance with Sections 3 and 4 of this Ordinance.

Section 3. Prohibited Uses

No Department shall deploy or use an AI system that:

  1. Performs social scoring of individuals or groups based on behavior, personal traits, or reputation for the purpose of denying services, benefits, or rights;
  2. Intentionally discriminates against a protected class in violation of state or federal law;
  3. Generates or deploys biometric identification or surveillance in violation of constitutional protections;
  4. Produces or facilitates unlawful deep-fake or deceptive content;
  5. Operates as a fully automated decision-making system without meaningful human review in matters affecting legal rights or obligations.

Section 4. Oversight and Approval for High-Risk AI Uses

(a) Inventory Requirement. The City Manager shall maintain a centralized AI Systems Inventory identifying:

  • Each AI system in use
  • The Department deploying the system
  • The system’s purpose
  • Whether the use is classified as high-risk

(b) Approval Process. Prior to deployment of any High-Risk AI Use, the Department must:

  1. Submit a written justification describing the system’s purpose and scope;
  2. Identify the data sources used by the system;
  3. Describe human oversight mechanisms;
  4. Obtain approval from:
    • The City Manager (or designee), and
    • The City Attorney for legal compliance review.

(c) Human Accountability. Each AI system shall have a designated human owner responsible for:

  • Monitoring performance
  • Responding to errors or complaints
  • Suspending use if risks are identified

Section 5. Transparency and Public Disclosure

(a) Disclosure to the Public. When a City AI system interacts directly with residents, the City shall provide clear notice that the interaction involves AI.

(b) Public Reporting. The City shall publish annually:

  • A summary of AI systems in use
  • The general purposes of high-risk AI systems
  • Contact information for public inquiries

No proprietary or security-sensitive information shall be disclosed.

Section 6. Procurement and Vendor Requirements

All City contracts involving AI systems shall, where applicable:

  1. Require disclosure of AI functions;
  2. Prohibit undisclosed algorithmic decision-making;
  3. Allow the City to audit or review AI system outputs relevant to City operations;
  4. Require vendors to notify the City of material changes to AI functionality.

Section 7. Review and Sunset

(a) Periodic Review. High-risk AI systems shall be reviewed at least annually to assess:

  • Accuracy
  • Bias
  • Continued necessity
  • Compliance with this Ordinance

(b) Sunset Authority. The City Manager may suspend or terminate use of any AI system that poses unacceptable risk or fails compliance review.

Section 8. Training

The City shall provide appropriate training to employees involved in:

  • Deploying AI systems
  • Supervising AI-assisted workflows
  • Interpreting AI-generated outputs

Section 9. Severability

If any provision of this Ordinance is held invalid, such invalidity shall not affect the remaining provisions.

Section 10. Effective Date

This Ordinance shall take effect immediately upon adoption.


Appendix 3 — City Manager Administrative Regulation: Responsible Use of Artificial Intelligence

ADMINISTRATIVE REGULATION NO. ___

Subject: Responsible Use of Artificial Intelligence (AI) in City Operations
Authority: Ordinance No. ___ (Responsible Use of Artificial Intelligence)
Issued by: City Manager
Effective Date: __________

1. Purpose

This Administrative Regulation establishes operational procedures for the responsible deployment, oversight, and monitoring of artificial intelligence (AI) systems used by the City, consistent with adopted Council policy and applicable state law.

The intent is to:

  • Enable rapid adoption of AI for productivity and service delivery;
  • Ensure transparency and accountability for higher-risk uses; and
  • Protect the City, employees, and residents from unintended consequences.

2. Scope

This regulation applies to all City departments, offices, and divisions that:

  • Develop, procure, deploy, or use AI systems; or
  • Rely on vendor-provided software that includes AI functionality.

3. AI System Classification

Departments shall classify AI systems into one of the following categories:

A. Tier 1 — Internal Productivity AI

Examples:

  • Document drafting and summarization
  • Data analysis and forecasting
  • Internal research and reporting
  • Workflow automation

Oversight Level:

  • Department-level approval
  • Registration in AI Inventory

B. Tier 2 — Decision-Adjacent AI

Examples:

  • Permit or inspection prioritization
  • Vendor or application risk scoring
  • Resource allocation recommendations
  • Enforcement or compliance triage

Oversight Level:

  • City Manager approval
  • Legal review
  • Annual performance review

C. Tier 3 — High-Risk AI

Examples:

  • AI influencing enforcement actions
  • Eligibility determinations
  • Public safety analytics
  • Biometric or surveillance tools

Oversight Level:

  • City Manager approval
  • City Attorney review
  • Documented human-in-the-loop controls
  • Annual audit and Council notification

4. AI Systems Inventory

The City Manager’s Office shall maintain a centralized AI Systems Inventory, which includes:

  • System name and vendor
  • Department owner
  • Purpose and classification tier
  • Date of deployment
  • Oversight requirements

Departments shall update the inventory prior to deploying any new AI system.
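
For cities implementing the inventory, a minimal sketch of how the records might be structured follows. It is illustrative only and not part of the model regulation; the field names, tier codes, and example entries are assumptions drawn from Sections 3 and 4.

```python
from dataclasses import dataclass, field
from datetime import date
from typing import List

@dataclass
class AISystemRecord:
    """One entry in the centralized AI Systems Inventory (fields mirror Section 4)."""
    system_name: str
    vendor: str
    department_owner: str
    purpose: str
    tier: int                 # 1 = Internal Productivity, 2 = Decision-Adjacent, 3 = High-Risk
    deployed_on: date
    oversight: List[str] = field(default_factory=list)

# Hypothetical example entries; system names and vendors are illustrative.
inventory = [
    AISystemRecord("Permit Triage Assistant", "ExampleVendor A", "Development Services",
                   "Prioritize permit applications for review", 2, date(2026, 3, 1),
                   ["City Manager approval", "Legal review", "Annual performance review"]),
    AISystemRecord("Document Summarizer", "ExampleVendor B", "City Secretary",
                   "Summarize staff reports", 1, date(2026, 1, 15),
                   ["Department-level approval"]),
]

# Simple compliance report: Tier 3 systems require annual audit and Council notification.
for rec in inventory:
    if rec.tier == 3:
        print(f"{rec.system_name} ({rec.department_owner}): Council notification required")
```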

5. Approval Process

A. Tier 1 Systems

  • Approved by Department Director
  • Registered in inventory

B. Tier 2 and Tier 3 Systems

Departments must submit:

  1. A description of the system and intended use
  2. Data sources and inputs
  3. Description of human oversight
  4. Risk mitigation measures

Approval required from:

  • City Manager (or designee)
  • City Attorney (for legal compliance)

6. Human Oversight & Accountability

Each AI system shall have a designated System Owner responsible for:

  • Monitoring system outputs
  • Responding to errors or complaints
  • Suspending use if risks emerge
  • Coordinating audits or reviews

No AI system may operate as a fully autonomous decision-maker for actions affecting legal rights or obligations.

7. Vendor & Procurement Controls

Procurement involving AI systems shall:

  • Identify AI functionality explicitly in solicitations
  • Require vendors to disclose material AI updates
  • Prohibit undisclosed algorithmic decision-making
  • Preserve City audit and review rights

8. Monitoring, Review & Sunset

  • Tier 2 and Tier 3 systems shall undergo annual review.
  • Systems may be suspended or sunset if:
    • Accuracy degrades
    • Bias is identified
    • Legal risk increases
    • The system no longer serves a defined purpose

9. Training

Departments deploying AI shall ensure appropriate staff training covering:

  • Proper interpretation of AI outputs
  • Limitations of AI systems
  • Escalation and error-handling procedures

10. Reporting to Council

The City Manager shall provide Council with:

  • An annual summary of AI systems in use
  • Identification of Tier 3 (High-Risk) systems
  • Any material incidents or corrective actions

11. Workforce Considerations

The use of artificial intelligence systems may change job functions and workflows over time. Departments shall:

  • Use AI to augment employee capabilities wherever possible;
  • Prioritize retraining, reassignment, and natural attrition when workflows change;
  • Coordinate with Human Resources before deploying AI systems that materially alter job duties; and
  • Recognize that long-term staffing impacts, if any, remain subject to City Manager and City Council authority.

12. Effective Date

This Administrative Regulation is effective immediately upon issuance.

Appendix 4 — Public-Facing FAQ: Responsible Use of Artificial Intelligence in City Operations

What is this ordinance about?

This ordinance establishes clear rules for how the City may use artificial intelligence (AI) tools. It allows the City to use modern technology to improve efficiency and service delivery while ensuring that higher-risk uses are transparent, accountable, and overseen by people.

Is the City already using artificial intelligence?

Yes. Like most modern organizations, the City already uses AI-enabled tools in limited ways for tasks such as document drafting, data analysis, and customer-service support, and through vendor-provided software that includes AI features.

This ordinance ensures those tools are used consistently and responsibly.

Is this ordinance banning artificial intelligence?

No.
The ordinance does not ban AI. It encourages responsible adoption of AI for productivity and internal efficiency while placing guardrails on uses that could affect people’s rights or access to services.

Why is the City adopting rules now?

AI tools are becoming more common and more capable. Clear rules help ensure:

  • Transparency in how AI is used
  • Accountability for outcomes
  • Compliance with new Texas law
  • Public trust in City operations

The Texas Legislature recently enacted statewide standards for AI use by government entities, and this ordinance aligns the City with those expectations.

Will artificial intelligence affect City jobs?

AI may change how work is done over time, just as previous technologies have.

This ordinance does not authorize immediate workforce reductions. Any long-term impacts are expected to occur gradually and, where possible, through:

  • Natural attrition
  • Reassignment
  • Retraining
  • Changes in job duties

Final staffing decisions remain with City leadership and City Council.

Will AI replace City employees?

AI tools are intended to assist employees, not replace human judgment. For higher-risk uses, the ordinance requires meaningful human oversight and accountability.

Can AI make decisions about me automatically?

No.
The ordinance prohibits fully automated decision-making that affects legal rights, enforcement actions, or access to services without human review.

AI may provide information or recommendations, but people remain responsible for decisions.

Will the City use AI for surveillance or facial recognition?

The ordinance prohibits AI uses that violate constitutional protections, including improper biometric surveillance.

Any use of biometric or surveillance-related AI would require strict legal review and compliance with state and federal law.

How will I know if I’m interacting with AI?

If the City uses AI systems that interact directly with residents, the City must clearly disclose that you are interacting with an AI system.

Does this apply to police or public safety?

Yes.
AI tools used in public safety contexts are considered higher-risk and require additional review, approval, and oversight. AI systems may not independently make enforcement decisions.

Who is responsible if an AI system makes a mistake?

Each AI system has a designated City employee responsible for monitoring its use, addressing errors, and suspending the system if necessary.

Responsibility remains with the City—not the software.

Will the public be able to see how AI is used?

Yes.
The City will publish an annual summary describing:

  • The types of AI systems in use
  • Their general purpose
  • How residents can ask questions or raise concerns

Sensitive or proprietary information will not be disclosed.

Does this create a new board or bureaucracy?

No.
Oversight is handled through existing City leadership and administrative structures.

Is there a cost to adopting this ordinance?

There is no direct cost associated with adoption. Over time, responsible AI use may help control costs by improving productivity and efficiency.

How often will this policy be reviewed?

Higher-risk AI systems are reviewed annually. The ordinance itself may be updated as technology and law evolve.

Who can I contact with questions or concerns?

Residents may contact the City Manager’s Office or submit inquiries through the City’s website. Information on AI use and reporting channels will be publicly available.

Bottom Line

This ordinance ensures the City:

  • Uses modern tools responsibly
  • Maintains human accountability
  • Protects public trust
  • Aligns with Texas law
  • Adapts thoughtfully to technological change

The Municipal & Business Workquake of 2026: Why Cities Must Redesign Roles Now—Before Attrition Does It for Them

A collaboration between Lewis McLain & AI

Cities are about to experience an administrative shift that will look nothing like a “tech revolution” and nothing like a classic workforce reduction. It will arrive as a workquake: a sudden drop in the labor required to complete routine tasks across multiple departments, driven by AI systems that can ingest documents, apply rules, assemble outputs, and draft narratives at scale.

The danger is not that cities will replace everyone with software. The danger is more subtle and far more likely: cities will allow AI to hollow out core functions unintentionally, through non-replacement hiring, scattered tool adoption, and informal workflow shortcuts—until the organization’s accountability structure no longer matches the work being done.

In 2026, the right posture is not fascination or fear. It is proactive redesign.


I. The Real Change: Task Takeover, Not Job Replacement

Municipal roles often look “human” because they involve public trust, compliance, and service. But much of the day-to-day work inside those roles is structured:

  • collecting inputs
  • applying policy checklists
  • preparing standardized packets
  • producing routine reports
  • tracking deadlines
  • drafting summaries
  • reconciling variances
  • adding narrative to numbers

Those tasks are precisely what modern AI systems now handle with speed and consistency. What remains human is still vital—but it is narrower: judgment, discretion, ethics, and accountability.

That creates the same pattern across departments:

  • the production layer shrinks rapidly
  • the review and exception layer becomes the job

Cities that don’t define this shift early will experience it late—as a staffing and governance crisis.


II. Example – City Secretary: Where Governance Work Becomes Automated

The city secretary function sits at the center of formal governance: agendas, minutes, public notices, records, ordinances, and elections. Much of the labor in this area is procedural and document-heavy.

Tasks likely to be absorbed quickly

  • Agenda assembly from departmental submissions
  • Packet compilation and formatting
  • Deadline tracking for posting and notices
  • Records indexing and retrieval
  • Draft minutes from audio/video with time stamps
  • Ordinance/resolution histories and cross-references

What shrinks

  • clerical assembly roles
  • manual transcription
  • routine records handling

What becomes more important

  • legal compliance judgment (Open Meetings, Public Information)
  • defensibility of the record
  • election integrity protocols
  • final human review of public-facing outputs

In other words: the city secretary role does not disappear. It becomes governance QA—with higher stakes and fewer support layers.
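
To make one of the absorbable tasks above concrete, here is a minimal sketch of automated deadline tracking for agenda postings, assuming the 72-hour notice requirement of the Texas Open Meetings Act. The function names and example dates are illustrative, not a reference implementation.

```python
from datetime import datetime, timedelta

NOTICE_HOURS = 72  # the Texas Open Meetings Act generally requires 72 hours' posted notice

def posting_deadline(meeting_start: datetime) -> datetime:
    """Latest moment the agenda notice may be posted for a given meeting."""
    return meeting_start - timedelta(hours=NOTICE_HOURS)

def posting_status(meeting_start: datetime, now: datetime) -> str:
    """Describe how much posting time remains, or flag a missed window."""
    deadline = posting_deadline(meeting_start)
    if now > deadline:
        return "MISSED: notice window closed; consult legal counsel before proceeding"
    hours_left = (deadline - now).total_seconds() / 3600
    return f"{hours_left:.1f} hours remain to post notice"

# Hypothetical example: a council meeting on Feb. 10 at 6:00 p.m.
meeting = datetime(2026, 2, 10, 18, 0)
print(posting_deadline(meeting))                            # 2026-02-07 18:00:00
print(posting_status(meeting, datetime(2026, 2, 6, 9, 0)))  # 33.0 hours remain to post notice
```

The system tracks the clock; the city secretary still owns the legal sufficiency of the posting itself.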


III. Example – Purchasing & Procurement: Where Process Becomes Automated Screening

Purchasing has always been a mix of routine compliance and high-risk discretion. AI hits the routine side first, fast.

Tasks likely to be absorbed quickly

  • quote comparisons and bid tabulations
  • price benchmarking against history and peers
  • contract template population
  • insurance/required-doc compliance checks
  • renewal tracking and vendor performance summaries
  • anomaly detection (odd pricing, split purchases, policy exceptions)

What shrinks

  • bid tabulators
  • quote chasers
  • contract formatting staff
  • clerical procurement roles

What becomes more important

  • vendor disputes and negotiations
  • integrity controls (conflicts, favoritism risk)
  • exception approvals with documented reasoning
  • strategic sourcing decisions

Procurement shifts from “processing” to risk-managed decisioning.
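
As one concrete instance of the anomaly detection listed above, the sketch below flags potential split purchases: multiple same-day payments to a single vendor that individually fall under an approval threshold but together exceed it. The threshold, data shape, and vendor names are assumptions for illustration.

```python
from collections import defaultdict
from datetime import date

APPROVAL_THRESHOLD = 50_000  # assumed bid threshold; actual limits vary by statute and policy

# (vendor, purchase date, amount) -- illustrative data only
purchases = [
    ("Acme Paving", date(2026, 4, 2), 49_500),
    ("Acme Paving", date(2026, 4, 2), 48_900),
    ("Delta Office Supply", date(2026, 4, 2), 1_200),
]

def flag_split_purchases(purchases, threshold):
    """Group same-vendor, same-day purchases and flag groups whose items each
    stay under the threshold but whose total meets or exceeds it."""
    groups = defaultdict(list)
    for vendor, day, amount in purchases:
        groups[(vendor, day)].append(amount)
    return [(vendor, day, sum(amts)) for (vendor, day), amts in groups.items()
            if len(amts) > 1 and all(a < threshold for a in amts) and sum(amts) >= threshold]

for vendor, day, total in flag_split_purchases(purchases, APPROVAL_THRESHOLD):
    print(f"Possible split purchase: {vendor} on {day} totals ${total:,} across multiple invoices")
```

The flag is cheap to produce automatically; deciding whether it reflects error, convenience, or intent is the human, risk-managed part of the job.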


IV. Example – Budget Analysts: Where “Analysis” Separates from “Assembly”

Budget offices are often mistaken for purely analytical shops. In reality, a large share of the work is assembly: gathering departmental submissions, normalizing formats, building tables, writing routine narratives, and explaining variances.

Tasks likely to be absorbed quickly

  • ingestion and normalization of department requests
  • enforcement of submission rules and formatting
  • auto-generated variance explanations
  • draft budget narratives (department summaries, highlights)
  • scenario tables (base, constrained, growth cases)
  • continuous budget-to-actual reconciliation

What shrinks

  • entry-level budget analysts
  • table builders and narrative drafters
  • budget book production labor

What becomes more important

  • setting assumptions and policy levers
  • framing tradeoffs for leadership and council
  • long-range fiscal forecasting judgment
  • telling the truth clearly under political pressure

Budget staff shift from spreadsheet production to decision support and persuasion with integrity.
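
To show what "auto-generated variance explanations" can look like in practice, the minimal sketch below computes budget-to-actual variances and drafts a one-line note for any item beyond a tolerance. The accounts, figures, and the 5% tolerance are assumptions; a human analyst still owns the final narrative.

```python
TOLERANCE = 0.05  # flag variances beyond +/-5% of budget (assumed policy, not a standard)

# (account, budget, actual) -- illustrative figures only
lines = [
    ("Overtime - Fire",   1_200_000, 1_410_000),
    ("Fuel - Fleet",        850_000,   790_000),
    ("Utilities - Parks",   400_000,   404_000),
]

def draft_variance_notes(lines, tolerance):
    """Draft a one-line note for each item whose variance exceeds tolerance.
    A human analyst reviews, corrects, and owns the final explanation."""
    notes = []
    for account, budget, actual in lines:
        variance = actual - budget
        pct = variance / budget
        if abs(pct) > tolerance:
            direction = "over" if variance > 0 else "under"
            notes.append(f"{account}: {direction} budget by ${abs(variance):,} ({pct:+.1%}); explanation needed")
    return notes

for note in draft_variance_notes(lines, TOLERANCE):
    print(note)
```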


V. Example – Police & Fire Data Analysts: Where Reporting Becomes Real-Time Patterning

Public safety analytics is one of the most automatable municipal domains because it is data-rich, structured, and continuous. The “report builder” role is especially vulnerable.

Tasks likely to be absorbed quickly

  • automated monthly/quarterly performance reporting
  • response-time distribution analysis
  • hotspot mapping and geospatial summaries
  • staffing demand pattern detection
  • anomaly flagging (unusual patterns in calls, activity, response)
  • draft CompStat-style narratives and slide-ready briefings

What shrinks

  • manual report builders
  • map producers
  • dashboard-only roles
  • grant-report drafters relying on routine metrics

What becomes more important

  • human interpretation (what the pattern means operationally)
  • explaining limitations and avoiding false certainty
  • bias and fairness oversight
  • defensible analytics for court, public inquiry, or media scrutiny

Public safety analytics becomes less about producing charts and more about protecting truth and trust.
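
As a small example of the response-time analysis listed above, this sketch computes percentile statistics for a month of response times and flags drift against a baseline. The data, baseline, and 10% drift threshold are illustrative assumptions; interpreting what the drift means operationally remains human work.

```python
import statistics

def percentile(values, p):
    """Nearest-rank percentile of a list of response times (minutes)."""
    ordered = sorted(values)
    rank = max(1, min(len(ordered), round(p / 100 * len(ordered))))
    return ordered[rank - 1]

BASELINE_P90 = 9.0  # assumed historical 90th-percentile response time, in minutes

# One month of response times (minutes) -- illustrative data only
this_month = [4.2, 5.1, 6.0, 6.8, 7.5, 8.1, 8.9, 9.6, 10.4, 12.2]

p90 = percentile(this_month, 90)
print(f"Median response: {statistics.median(this_month):.1f} min; 90th percentile: {p90:.1f} min")

# Flag drift beyond 10% of baseline; deciding what the drift means stays human.
if p90 > BASELINE_P90 * 1.10:
    print("FLAG: 90th-percentile response time is more than 10% above baseline")
```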


VI. Example – More Roles Next in Line

Permitting & Development Review

AI can quickly absorb:

  • completeness checks
  • code cross-referencing
  • workflow routing and status updates
  • templated staff reports

Humans remain essential for:

  • discretionary judgments
  • negotiation with applicants
  • interpreting ambiguous code situations
  • public-facing case management
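
A completeness check of the kind just listed can be as simple as comparing a submission against the required-document list for the permit type, as in this sketch; the permit types and document names are assumptions for illustration.

```python
# Required documents by permit type -- names are illustrative assumptions
REQUIRED_DOCS = {
    "residential_remodel": {"application", "site_plan", "contractor_registration"},
    "commercial_new": {"application", "site_plan", "civil_plans", "drainage_study", "fire_review"},
}

def completeness_check(permit_type: str, submitted: list) -> set:
    """Return the set of missing required documents; an empty set means complete.
    Staff still judge document adequacy, not merely presence."""
    return REQUIRED_DOCS[permit_type] - set(submitted)

missing = completeness_check("commercial_new", ["application", "site_plan", "civil_plans"])
print("Missing:", sorted(missing) if missing else "none; route to plan review")
```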

HR Analysts

AI absorbs:

  • classification comparisons
  • market surveys and comp modeling
  • policy drafting and FAQ support

Humans remain for:

  • discipline, negotiations, sensitive cases
  • equity judgments and culture
  • leadership counsel and conflict resolution

Grants Management

AI absorbs:

  • opportunity scanning and matching
  • compliance calendars
  • draft narrative sections and attachments lists

Humans remain for:

  • strategy (which grants matter)
  • partnerships and commitments
  • risk management and audit defense

VII. The Practical Reality in Cities: Attrition Is the Mechanism

This won’t arrive as dramatic layoffs. It will arrive as:

  • hiring freezes
  • “we won’t backfill that position”
  • consolidation of roles
  • sudden expectations that one person can do what three used to do

If cities do nothing, AI will still be adopted—piecemeal, unevenly, and without governance redesign. That produces an organization with:

  • fewer people
  • unclear accountability
  • heavier compliance risk
  • fragile institutional memory

VIII. What “Proactive” Looks Like in 2026

Cities need to act immediately in four practical ways:

  1. Define what must remain human
    • election integrity
    • public record defensibility
    • procurement exceptions and ethics
    • budget assumption-setting and council framing
    • public safety interpretation and bias oversight
  2. Separate production from review
    • let AI assemble
    • require humans to verify, approve, and own
  3. Rewrite job descriptions now
    • stop hiring for assembly work
    • hire for judgment, auditing, communication, and governance
  4. Build the governance layer (see the sketch after this list)
    • standards for AI outputs
    • audit trails
    • transparency policies
    • escalation rules
    • periodic review of AI-driven decisions
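
A minimal sketch of what that governance layer can look like in practice follows, assuming a city builds its own audit-trail records in Python. Every name and field below is illustrative, not a standard; the point is that AI assembles while a named human verifies, approves, and owns.

  # Illustrative audit record for AI-assisted work product. All names and
  # fields are assumptions, not a standard; adapt them to local
  # records-retention and transparency rules.
  from dataclasses import dataclass, field
  from datetime import datetime, timezone
  from typing import Optional

  @dataclass
  class AIOutputRecord:
      """One auditable unit of AI-assisted work product."""
      output_id: str                  # unique identifier for the artifact
      tool: str                       # which AI system produced it
      prompt_summary: str             # what was asked, in plain language
      source_datasets: list[str]      # sandbox datasets the output drew on
      produced_at: datetime
      reviewer: Optional[str] = None  # human who verified the output
      approved: bool = False          # approval is never automatic
      escalated: bool = False         # flagged for leadership or legal review
      notes: list[str] = field(default_factory=list)

      def approve(self, reviewer: str, note: str = "") -> None:
          """Record human verification; the reviewer owns the output."""
          self.reviewer = reviewer
          self.approved = True
          if note:
              self.notes.append(note)

  # Example: a draft variance narrative assembled by AI, then human-approved.
  record = AIOutputRecord(
      output_id="FY27-BUD-001",
      tool="sandbox-llm",
      prompt_summary="Draft variance explanation for Parks, Q2",
      source_datasets=["sandbox.gl_actuals", "sandbox.budget_fy27"],
      produced_at=datetime.now(timezone.utc),
  )
  record.approve(reviewer="budget.manager", note="Figures verified against GL.")

Even a record this simple makes periodic review of AI-driven decisions auditable: who asked, what data was used, and which human signed off.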

This is not an IT upgrade. It’s a redesign of how public authority is exercised.


Conclusion: The Choice Cities Face

Cities will adopt AI regardless—because the savings and speed will be undeniable. The only choice is whether the city adopts AI intentionally or accidentally.

If adopted intentionally, AI becomes:

  • a productivity tool
  • a compliance enhancer
  • a service accelerator

If adopted accidentally, AI becomes:

  • a quiet hollowing of institutional capacity
  • a transfer of control from policy to tool
  • and eventually a governance failure that will be blamed on people who never had the chance to redesign the system

2026 is early enough to steer the transition.
Waiting will not preserve the old model. It will only ensure the new one arrives without a plan.

End note: I usually spend a couple of days (minimum) compiling all my bank and credit card records, assigning classifications, summarizing, and handing my CPA a complete set of documents. This year I uploaded the documents to AI, gave it instructions to prepare the package, and had it answer a list of questions regarding reconciliation and classification issues. Two hours later, I had the full package, with comparisons to past years drawn from the returns I had also uploaded. I was 100% ready on New Year's Eve, waiting only for the 1099s to arrive by the end of January. Meanwhile, I have had AI enhance and build out a comprehensive accounting system with beautiful schedules: cash flow, taxation notes, checklists reflecting new IRS rules, and general help, more than I was getting from my CPA. I'll be able to take over the CPA duties myself. It's just the start of what I can turn over to AI while I become the editor and reviewer instead of doing the dreaded grunt work. LFM

The Infrastructure We Don’t See: Aging Gas Systems, Hidden Risks, and the Case for Annual Accountability

A collaboration between Lewis McLain & AI

It’s not if, but when!

Natural gas infrastructure is the most invisible—and therefore the most misunderstood—critical system in modern cities. Power lines are visible. Water mains announce themselves through pressure and flow. Roads crack and bridges age in plain sight. But gas lines remain buried, silent, and largely forgotten—until something goes wrong.

That invisibility is not benign. It creates a governance gap where responsibility is fragmented, risk is assumed rather than measured, and accountability is episodic instead of continuous. As cities grow denser, older, and more complex, that gap widens.

This essay makes a simple but demanding case: cities should require annual, technical accountability briefings from gas utilities and structured gas-safety evaluations for high-occupancy buildings—public and private—because safety is no longer assured by age, ownership boundaries, or regulatory compliance alone.

The ultimate question is not whether gas systems are regulated. They are.
The question is whether, at the local level, we are actually safer than we were a year ago.


I. The Aging Gas Network: A Technical Reality, Not a Hypothetical

Much of the U.S. gas distribution network was installed decades ago. While significant modernization has occurred, legacy materials—particularly cast iron and bare steel—still exist in pockets, often in the very neighborhoods where density, redevelopment, and consequence are highest.

These systems age in predictable ways:

  • Material degradation such as corrosion, joint failure, and metal fatigue
  • Ground movement from expansive soils, drought cycles, and freeze–thaw conditions
  • Pressure cycling driven by modern load variability
  • Construction interaction, including third-party damage during roadway, utility, and redevelopment projects

Technically speaking, aging is not a binary condition. It is a curve. Systems do not fail all at once; they fail where stress, material fatigue, and external disturbance intersect. Cities that approve redevelopment without understanding where those intersections lie are not managing risk—they are inheriting it.


II. Monitoring Is Better Than Ever—But It Is Not Replacement

Modern gas utilities deploy advanced leak detection technologies that did not exist a generation ago: mobile survey vehicles, high-sensitivity handheld sensors, aerial detection, and in some cases continuous monitoring.

Regulatory standards have improved as well. Leak surveys are more frequent, detection thresholds are lower, and repair timelines are clearer. From a technical standpoint, the industry is better at finding leaks than it was even a few years ago.

But monitoring is inherently reactive. It detects deterioration after it has begun. It does not restore structural integrity. It does not change the age profile of the system. It does not eliminate brittle joints or corrosion-prone materials.

Replacement is the only permanent risk reduction. And replacement is expensive, disruptive, and largely invisible unless cities require it to be discussed openly.


III. Why Annual Gas Utility Accountability Briefings Are Essential

Gas utilities operate under long-range capital replacement programs driven by regulatory approval, rate recovery, and internal prioritization models. Cities operate under land-use approvals, zoning changes, density increases, and redevelopment pressures that can change risk far faster than infrastructure plans adjust.

An annual gas utility accountability briefing is how those two worlds reconnect.

Not a promotional update. Not a general safety overview. But a technical, decision-grade briefing that allows city leadership to understand:

  • What materials remain in the ground
  • Where risk is concentrated
  • How fast legacy systems are being retired
  • Whether replacement is keeping pace with growth
  • Where development decisions may be increasing consequence

Without this, cities are effectively approving new intensity above ground while assuming adequacy below it.


IV. The Forgotten Segment: From the Meter to the Building

Most gas incidents that injure people do not originate in transmission pipelines or deep mains. They occur closest to occupied space—often in the short stretch between the gas meter and the building structure.

Legally, responsibility is clear:

  • The utility owns and maintains the system up to the meter.
  • The property owner owns everything downstream.

Assessment, however, is not.

Post-meter gas piping is frequently:

  • Older steel without modern corrosion protection
  • Stressed by foundation movement
  • Altered during remodels and additions
  • Poorly documented
  • Rarely inspected after initial construction

Utilities generally do not inspect customer-owned piping. Building departments see it only during permitted work. Fire departments respond after leaks are reported. Property owners often do not realize they own it.

This creates a true orphaned asset class: high-consequence infrastructure with no lifecycle oversight.


V. Responsibility Alone Is Not Safety

Cities often take comfort in the legal distinction: “That’s private property.” Legally, that is correct. Practically, it is insufficient.

Gas does not respect ownership boundaries. A failure inside a school, apartment building, restaurant, or nursing home becomes a public emergency immediately.

Risk governance does not require cities to assume liability. It requires them to ensure that someone is actually evaluating risk in places where failure would have severe consequences.


VI. Required Gas-Safety Evaluations for High-Occupancy Properties

This is the missing pillar of modern gas safety.

Just as elevators, fire suppression systems, and boilers undergo periodic inspection, gas piping systems in high-occupancy buildings should be subject to structured evaluation—regardless of whether the building is publicly or privately owned.

Facilities warranting mandatory evaluation include:

  • Schools (public and private)
  • Daycares
  • Nursing homes and assisted-living facilities
  • Hospitals and clinics
  • Large multifamily buildings
  • Assembly venues (churches, theaters, gyms)
  • Restaurants and food-service establishments
  • High-load commercial and industrial users

These are places where evacuation is difficult, ignition sources are common, and consequences are magnified.

A gas-safety evaluation should assess:

  • Condition and material of post-meter piping
  • Corrosion, support, and anchoring
  • Stress at building entry points
  • Evidence of undocumented modifications or abandoned lines
  • Accessibility and labeling of shutoff valves

These evaluations need not be frequent. They need to be periodic, triggered, and credible.


VII. Triggers That Make the System Work

Cities can implement this framework without blanket inspections by tying evaluations to specific events:

  • Change of occupancy or use
  • Major remodels or additions
  • Buildings reaching certain age thresholds when work is permitted
  • Repeated gas odor or leak responses
  • Sale or transfer of high-occupancy properties

This approach focuses effort where risk is most likely to have changed.
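
To make the mechanics concrete, here is a minimal sketch of how a permitting system might encode these triggers, written in Python. The function name, parameters, and threshold values are hypothetical stand-ins for whatever the adopting city writes into its ordinance.

  # Illustrative only: thresholds stand in for locally adopted values.
  from datetime import date

  AGE_THRESHOLD_YEARS = 40       # placeholder for the adopted age threshold
  LEAK_RESPONSE_THRESHOLD = 2    # "repeated" responses within a review window

  def evaluation_required(
      occupancy_changed: bool,
      major_remodel: bool,
      year_built: int,
      permit_pending: bool,
      leak_responses_past_year: int,
      property_transferred: bool,
  ) -> bool:
      """Return True when any adopted trigger fires for a covered property."""
      building_age = date.today().year - year_built
      return (
          occupancy_changed
          or major_remodel
          or (permit_pending and building_age >= AGE_THRESHOLD_YEARS)
          or leak_responses_past_year >= LEAK_RESPONSE_THRESHOLD
          or property_transferred
      )

  # Example: a 1975 restaurant pulling a remodel permit triggers an evaluation.
  print(evaluation_required(False, True, 1975, True, 0, False))  # True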


VIII. Public vs. Private: One Standard of Care

A gas explosion in a public school is not meaningfully different from one in a private daycare or restaurant. The victims do not care who owned the pipe.

A city that limits safety evaluation requirements to public buildings is acknowledging risk—but only partially. The standard should be risk-based, not ownership-based.


IX. Are We Better or Worse Off Than a Year Ago?

Technically, the answer is nuanced.

We are better off nationally in detection capability and regulatory clarity. Technology has improved. Survey frequency has increased. Reporting is stronger.

But many cities are likely worse off locally in exposure:

  • Buildings are older
  • Density is higher
  • Construction activity is heavier
  • Post-meter piping remains largely unassessed
  • High-occupancy facilities rely on outdated assumptions

So the honest answer is this:

We are better at finding problems—but not necessarily better at eliminating risk where people live, work, and gather.


X. Governance Is the Missing Link

Gas safety is no longer only an engineering problem. It is a governance problem.

Cities already regulate:

  • Land use and density
  • Building permits and occupancy
  • Business licensing
  • Emergency response coordination

Requiring annual gas utility accountability briefings and targeted gas-safety evaluations does not expand government arbitrarily. It closes a blind spot that modern urban conditions have exposed.


Conclusion: Asking the Right Question, Every Year

The most important question cities should ask annually is not:

“Did the utility comply with regulations?”

It is:

“Given our growth, our buildings, and our infrastructure, are we actually safer than we were last year?”

If city leaders cannot answer that clearly—above ground and below—it is not because the answer is unknowable.

It is because no one has required it to be known.


Appendix A

Model Ordinance: Gas Infrastructure Accountability and High-Occupancy Safety Evaluations

This model ordinance is designed to improve transparency, situational awareness, and public safety without transferring ownership, operational control, or liability from utilities or property owners to the City.


Section 1. Purpose and Findings

1.1 Purpose

The purpose of this ordinance is to:

  1. Improve transparency regarding the condition, monitoring, and replacement of gas infrastructure;
  2. Ensure that risks associated with aging gas systems are identified and reduced over time;
  3. Require periodic gas safety evaluations for high-occupancy buildings where consequences of failure are greatest;
  4. Strengthen coordination among gas utilities, property owners, and City emergency services; and
  5. Establish consistent, decision-grade information for City leadership.

1.2 Findings

The City Council finds that:

  1. Natural gas infrastructure is largely underground and not visible to the public.
  2. Portions of the gas system—including customer-owned piping—may age without systematic reassessment.
  3. Increased density, redevelopment, and construction activity elevate the consequences of gas failures.
  4. Existing regulatory frameworks do not provide city-specific visibility into system condition or replacement progress.
  5. Periodic reporting and targeted evaluation improve public safety without assuming utility or private ownership responsibilities.

Section 2. Annual Gas Utility Accountability Briefing

2.1 Requirement

Each gas utility operating within the City shall provide an Annual Gas Infrastructure Accountability Briefing to the City Council or its designated committee.

2.2 Scope

The briefing shall address, at a minimum:

  • Pipeline materials and age profile;
  • Replacement progress and future plans;
  • Leak detection, classification, and repair performance;
  • High-consequence areas and impacts of development;
  • Construction coordination and damage prevention;
  • Emergency response readiness and communication protocols.

2.3 Format and Standards

  • Briefings shall include written materials, maps, and data tables.
  • Metrics shall be presented in a year-over-year comparable format.
  • Information shall be technical, factual, and suitable for governance decision-making.

2.4 No Transfer of Liability

Nothing in this section shall be construed to transfer ownership, maintenance responsibility, or operational control of gas facilities to the City.


Section 3. High-Occupancy Gas Safety Evaluations

3.1 Covered Facilities

Gas safety evaluations are required for the following facilities, whether publicly or privately owned:

  • Schools (public and private)
  • Daycare facilities
  • Nursing homes and assisted-living facilities
  • Hospitals and medical clinics
  • Multifamily buildings exceeding [X] dwelling units
  • Assembly occupancies exceeding [X] persons
  • Restaurants and commercial food-service establishments
  • Other facilities designated by the Fire Marshal as high-consequence occupancies

3.2 Scope of Evaluation

Evaluations shall assess:

  • Condition and materials of post-meter gas piping
  • Corrosion potential and structural support
  • Stress at building entry points and foundations
  • Evidence of undocumented modifications or abandoned piping
  • Accessibility, labeling, and operation of shutoff valves

3.3 Qualified Evaluators

Evaluations shall be conducted by:

  • Licensed plumbers,
  • Licensed mechanical contractors, or
  • Professional engineers with gas system experience.

3.4 Triggers

Evaluations shall be required upon:

  • Change of occupancy or use;
  • Major remodels or building additions;
  • Buildings reaching [X] years of age when permits are issued;
  • Repeated gas odor complaints or leak responses;
  • Sale or transfer of covered properties, if adopted by the City.

Section 4. Documentation and Compliance

4.1 Certification

Property owners shall submit documentation certifying completion of required evaluations.

4.2 Corrective Action

Identified hazards shall be corrected within timeframes established by code officials.

4.3 Enforcement

Non-compliance may result in:

  • Withholding of permits or certificates of occupancy;
  • Temporary suspension of approvals;
  • Administrative penalties as authorized by law.

Section 5. Education and Coordination

The City shall:

  • Provide educational materials clarifying ownership and safety responsibilities;
  • Coordinate with gas utilities on public outreach;
  • Integrate findings into emergency response planning and training.


Appendix B

Annual Gas Utility Accountability Briefing — Preparation Checklist

This checklist ensures annual briefings are consistent, measurable, and focused on risk reduction rather than general compliance.


I. System Inventory & Condition

☐ Total pipeline miles within city limits (distribution vs. transmission)
☐ Pipeline miles by material type
☐ Pipeline miles by decade installed
☐ Location and extent of remaining legacy materials
☐ Identification of oldest segments still in service


II. Replacement Progress

☐ Miles replaced in the previous year (by material type)
☐ Five-year replacement plan with schedules
☐ Funded vs. unfunded replacement projects
☐ Year-over-year reduction in legacy materials
☐ Explanation of changes from prior plans


III. Leak Detection & Repair Performance

☐ Total leaks detected (normalized per mile; see the worked example below)
☐ Leak classification breakdown
☐ Average and maximum repair times by class
☐ Repeat leak locations identified and mapped
☐ Root-cause analysis of recurring issues
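
As a worked example of the normalization these items call for, the sketch below uses made-up figures in Python; a real briefing would substitute the utility's own counts and mileage, presented year-over-year as Section 2.3 of the model ordinance requires.

  # Hypothetical figures only; real briefings use the utility's own data.
  def leaks_per_mile(leaks_detected: int, system_miles: float) -> float:
      """Normalize raw leak counts so systems of different sizes compare fairly."""
      return leaks_detected / system_miles

  def year_over_year_change(current: float, prior: float) -> float:
      """Percent change, the comparable format the ordinance requires."""
      return (current - prior) / prior * 100.0

  current_rate = leaks_per_mile(180, 620)  # ~0.290 leaks per mile this year
  prior_rate = leaks_per_mile(210, 600)    # 0.350 leaks per mile last year
  print(f"{year_over_year_change(current_rate, prior_rate):+.1f}%")  # -17.1%

A falling normalized rate is evidence of improvement; a rising one is the start of the conversation the briefing exists to force.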


IV. Monitoring Technology

☐ Detection technologies currently deployed
☐ Survey frequency achieved vs. required
☐ Use of advanced or emerging detection tools
☐ Known limitations of monitoring methods


V. High-Consequence Areas

☐ Definition and criteria for high-consequence zones
☐ Updated risk maps
☐ Impact of new development on risk profile
☐ Trunk lines serving rapidly densifying areas


VI. Construction & Damage Prevention

☐ Third-party damage incidents
☐ 811 ticket response performance
☐ High-risk project types identified
☐ Coordination procedures with City capital projects


VII. Emergency Response Readiness

☐ Incident response timelines
☐ Coordination with fire, police, and emergency management
☐ Date and scope of last joint exercise or drill
☐ Public communication and notification protocols


VIII. Customer-Owned (Post-Meter) Piping

☐ Incidents involving post-meter piping
☐ Common failure materials or conditions
☐ Customer education and outreach efforts
☐ Voluntary inspection or assistance programs


IX. Forward-Looking Risk Assessment

☐ Top unresolved risks
☐ Areas of greatest concern
☐ Commitments for the next 12 months
☐ Clear answer to: “Are we safer than last year—and why?”


Closing Note

A briefing that cannot complete this checklist is not merely incomplete; it reveals exactly where risk remains unmanaged.

That visibility is the purpose of accountability.