Mexico’s Cartel System: What Just Happened — and What Comes Next

A collaboration between Lewis McLain & AI

I. The Cartel Landscape: Not a Pyramid, but a Web

Mexico’s cartel world is not one giant mafia with a single throne. It’s a shifting network of powerful criminal organizations, splinter groups, regional franchises, and temporary alliances.

The two dominant forces in recent years:


🔵 Sinaloa Cartel

  • Deep international smuggling infrastructure
  • Major fentanyl and meth production
  • Historically associated with Joaquín “El Chapo” Guzmán
  • Currently fragmented into powerful factions

Sinaloa built a reputation for operational sophistication. Less theatrical than some rivals — but massively global.


🔥 Jalisco New Generation Cartel (CJNG)

  • Rapid expansion since ~2010
  • Militarized posture
  • Heavy weapons and armored convoys
  • Long led by Nemesio Oseguera Cervantes (“El Mencho”)

CJNG grew aggressively, often clashing directly with Sinaloa and absorbing weaker groups.

Other significant players include:

  • Gulf Cartel
  • Los Zetas (and its remnants)
  • Beltrán-Leyva Organization
  • La Familia Michoacana and splinters

But the modern battlefield has increasingly been Sinaloa vs. CJNG.


II. The Immediate Story: El Mencho Reportedly Killed

Mexico’s military reports that El Mencho was killed in a targeted operation in Jalisco.

If confirmed and sustained (details often evolve in cartel cases), this is one of the most consequential blows to a Mexican criminal organization in over a decade.

What follows such events historically?

  1. Internal succession battles
  2. Splinter factions breaking off
  3. Short-term violence spikes
  4. Rival cartels testing territory

The removal of a kingpin rarely ends a cartel. It destabilizes it.

Think less “collapse” and more “fragmentation under pressure.”


III. The Rumored “Agreement”: Kill Each Other, Leave Civilians Alone?

After events like this, a familiar story resurfaces:

Cartels are allowed to fight each other as long as they avoid harming citizens and especially tourists.

Let’s analyze that soberly.

Is there a formal agreement?

No verified evidence supports a nationwide, formal agreement between the Mexican federal government and cartels permitting inter-cartel violence under negotiated conditions.

Such a policy would amount to institutionalized impunity. No credible documentation supports that claim.

Is there informal tolerance in some regions?

Corruption absolutely exists at local levels. In certain historical periods — particularly before the mid-2000s — analysts describe something closer to “managed containment”:

  • Violence discouraged if it disrupted economic stability
  • Trafficking routes quietly tolerated
  • Public spectacle minimized

But that was not a moral contract. It was corruption plus centralized political control.

When political centralization weakened, so did that equilibrium.

Why does the tourist-protection idea persist?

Economics.

Cartels are businesses with guns. Tourism generates billions. Killing tourists invites:

  • Federal troop deployments
  • International pressure
  • Economic backlash
  • Media spotlight

So many groups avoid unnecessary attention in resort zones — not because of ethics, but incentives.

Yet civilians absolutely die every year in large numbers:

  • Extortion victims
  • Journalists
  • Politicians
  • Migrants
  • Bystanders in crossfire

Homicide data alone disproves the idea of a functioning “civilian shield” agreement.

Organized crime sometimes acts rationally. It does not act morally.


IV. Why Fentanyl Changed Everything

One reason the cartel landscape has grown more violent is the fentanyl economy.

Fentanyl is:

  • Synthetic
  • Extremely cheap to produce
  • Highly profitable
  • Compact and easy to transport

Unlike plant-based drugs (marijuana, heroin), fentanyl production depends more on chemical supply chains than farmland.

That lowers entry barriers and increases fragmentation.

More actors can compete.

More actors compete → more turf wars.


V. Where This Is Heading

El Mencho’s death, if solidly confirmed, likely produces one of four trajectories:

1️⃣ CJNG Consolidates Under a Successor

A lieutenant quickly stabilizes control. Violence spikes briefly, then normalizes.

2️⃣ Fragmentation

CJNG splits into regional factions fighting each other and Sinaloa. Violence increases.

3️⃣ Sinaloa Expansion

Sinaloa factions exploit instability to absorb territory.

4️⃣ Federal Escalation

Mexico increases military deployments, temporarily suppressing overt conflict.

History suggests fragmentation is most common after a kingpin removal.

And fragmentation increases unpredictability.


VI. The Bigger Structural Issue

Cartels exist at the intersection of:

  • U.S. drug demand
  • Weak local governance in some regions
  • Corruption vulnerabilities
  • Enormous profit margins

Removing leaders addresses symptoms. It rarely addresses incentives.

Until the demand side shifts, the profit engine keeps running.

This is not a story of villains in isolation. It is a story of transnational economics, political systems, and power vacuums.


The Uncomfortable Prediction

Short term:
Expect turbulence in Jalisco and contested corridors.

Medium term:
Watch for internal CJNG fractures or aggressive Sinaloa positioning.

Long term:
Unless structural incentives change, the system adapts. It always has.

Criminal ecosystems evolve the way markets evolve.

And markets — legal or illegal — follow incentives.

The Day After Presidents’ Day

A collaboration between Lewis McLain & AI

Washington, Lincoln, and the Work That Remains

Presidents’ Day passes quietly.

The sales end. The long weekend dissolves. The banners come down. By Tuesday morning, the marble figures return to their pedestals, and the Republic resumes its ordinary rhythm — traffic lights blinking, council meetings convening, paperwork accumulating.

And yet something deeper lingers.

Presidents’ Day is not simply a celebration of personalities. It is a reminder of two different kinds of leadership embodied most clearly in George Washington and Abraham Lincoln.

Washington represents restraint.
Lincoln represents moral endurance.

Together they frame the American experiment.

Washington: The Discipline of Restraint

Washington’s greatest act was not winning a war. It was relinquishing power.

In his Farewell Address, he warned the young nation about the dangers of faction, the seduction of foreign entanglements, and the slow corrosion of civic virtue. He feared that partisan spirit would divide citizens into camps more loyal to party than to country. He urged unity not as sentiment, but as structural necessity.

Here is his counsel in poetic form:


Washington’s Farewell

A Poetic Rendering

Friends and fellow citizens,
The hour approaches
When you must choose again
The bearer of executive trust.

I will not be among the candidates.

Not from indifference—
But from conviction
That no republic should depend
Too long upon one man.

Cherish the Union.

You are one people—
Bound not by region,
But by shared sacrifice
And shared destiny.

In unity is strength.
In division, vulnerability.

Beware the spirit of party.

Faction flatters,
Then divides.
It inflames passions,
Distorts truth,
And opens doors
To foreign influence.

Cultivate virtue.

Liberty without moral restraint
Cannot stand.

Promote knowledge.
Respect the Constitution.
Let change come lawfully.
Keep power within its bounds.

Trade with all.
Entangle with none.

If I have erred,
Count it human frailty.

May the Union endure—
Not by force of one,
But by restraint of all.


Washington feared instability born of excess ambition. His genius was sobriety.

But history would test the Union more severely than even he imagined.

Lincoln: The Burden of Mercy

If Washington guarded the structure, Lincoln confronted its fracture.

The Civil War forced the nation to confront its founding contradiction — liberty proclaimed, slavery practiced. Lincoln did not speak with Washington’s caution. He spoke with grief, gravity, and moral resolve.

Here is Lincoln’s voice rendered in verse, drawn from Gettysburg and the Second Inaugural:


Lincoln’s Counsel

A Poetic Rendering

Four score and seven years ago
A nation was born—
Conceived in liberty,
Dedicated to equality.

That proposition
Was tested by war.

Brother against brother.
Fields turned red.
A Union strained
To the breaking.

Both prayed to the same God.
Both asked victory
Of the same Heaven.

The prayers could not both be answered.

If every drop drawn by the lash
Must be repaid
By another drawn by the sword—
So be it.

Justice is not hurried.
It is measured.

But hear this:

With malice toward none,
With charity for all,
With firmness in the right
As God gives us to see the right—

Let us bind up the nation’s wounds.

Care for him who bore the battle.
Finish the work.

Government of the people,
By the people,
For the people—
Shall not perish—

If the people
Choose endurance
Over bitterness.


Lincoln’s greatness was not only in preserving the Union, but in insisting that reconciliation must accompany victory.

Washington taught restraint.
Lincoln taught mercy.

The Day After

So what happens the day after Presidents’ Day?

The Republic does not survive on marble.

It survives on habits.

On citizens who prefer limits over applause.
On leaders who accept lawful boundaries.
On neighbors who argue without dissolving.
On voters who remember that unity is not sentimental — it is structural.

The presidency is powerful. But the republic is larger.

The real ceremony begins when no one is watching.

When contracts are honored.
When power pauses because law requires it.
When disagreement does not become dehumanization.
When conscience tempers conviction.

Presidents’ Day is not about nostalgia. It is about continuity.

Washington reminds us that ambition must yield to constitutional order.
Lincoln reminds us that justice must be pursued without malice.

And Tuesday morning reminds us that the experiment continues.

Not by force of one.

But by restraint, mercy, and discipline in us all.

Peace Through Strength

A collaboration between Lewis McLain & AI

“Peace through strength” is not a slogan invented for campaign banners. It is a strategic theory older than the Roman legions and as modern as hypersonic missiles. The logic is stark: a nation that can decisively defend itself is less likely to be tested. Deterrence works not because war is desired, but because war is convincingly unwinnable.

The United States is currently investing in that logic at scale.

This is not a nostalgic rebuild of World War II mass armies. It is a systemic modernization of ships, aircraft, armored forces, and—most significantly—long-range precision fires. The aim is not simply more power, but smarter, deeper, and more survivable power.


The Naval Backbone: Sea Control in an Age of Competition

The U.S. Navy remains the central pillar of global deterrence. Maritime power is quiet until it is decisive. It guarantees trade routes, projects force without permanent occupation, and complicates adversaries’ planning before the first shot is ever fired.

Current investments include continued production of the Arleigh Burke-class destroyer, upgraded with enhanced radar systems, ballistic missile defense capabilities, and expanded vertical launch capacity. These ships are not merely hulls; they are floating missile batteries integrated into global sensor networks.

Subsurface dominance continues with the Virginia-class submarine—arguably the stealthiest attack submarine class in the world. Newer blocks include improved acoustic stealth, payload modules for expanded cruise missile capacity, and enhanced undersea surveillance systems. Submarines are deterrence in its purest form: invisible, persistent, and unpredictable.

Shipbuilding budgets in recent fiscal cycles reflect sustained procurement and industrial base expansion. The strategy is clear: deterrence in the Pacific and Atlantic requires numbers, resilience, and distributed lethality.

Peace, at sea, depends on dominance beneath it.


Air Superiority: From Fifth to Sixth Generation

Air power remains the fastest form of strategic messaging.

The F-35 Lightning II continues to expand across U.S. services. Its defining feature is not just stealth—it is sensor fusion. The aircraft collects data from radar, infrared systems, electronic warfare sensors, and off-board sources, presenting a single integrated battlefield picture to the pilot. In modern combat, information dominance often determines survival before missiles are ever launched.

Beyond the F-35 lies the Next Generation Air Dominance program—sometimes referred to in open sources as a sixth-generation fighter concept. These aircraft are expected to integrate AI-assisted decision systems, collaborative drone “wingmen,” advanced propulsion for greater range, and even more sophisticated electronic warfare capabilities.

The trend is unmistakable: air power is shifting from platform-centric warfare to network-centric warfare. Aircraft are becoming nodes in a combat web, sharing data instantly across services.

Deterrence in the sky now depends as much on bandwidth as on bombs.


Armored Forces: Modernizing the Heavy Fist

On land, the United States continues modernization of the M1 Abrams platform. Upgrades focus on survivability (improved armor packages and active protection systems), power management (to reduce fuel burden and electronic strain), and digital battlefield integration.

The tank’s role in modern war is debated by analysts, but its deterrent symbolism remains potent. Armor projects resolve. It reassures allies. It complicates adversaries’ calculus. A credible heavy force makes conventional invasion far less appealing.

But the most dramatic transformation on land is not the tank.

It is artillery.


The Artillery Revolution: Range, Precision, and Depth

For decades, traditional U.S. tube artillery reached roughly 20–40 kilometers with unguided shells, depending on charge and munition. Modernization efforts are rewriting that geometry.

The M142 HIMARS platform now fires Extended Range Guided Multiple Launch Rocket System (ER GMLRS) munitions capable of roughly doubling previous rocket ranges—reaching well beyond 100 kilometers in testing.

That is not a marginal increase. That is a 2× expansion of battlefield depth.
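The range arithmetic here can be checked with round numbers. A minimal sketch, where both figures are illustrative assumptions rather than official specifications:

```python
# Illustrative range figures (assumed round numbers, not official specs).
legacy_gmlrs_km = 70      # approximate legacy GMLRS rocket range (assumed)
er_gmlrs_km = 140         # ER GMLRS, "roughly doubling" the legacy figure (assumed)

# Depth multiplier: how much farther ground fires can reach.
depth_multiplier = er_gmlrs_km / legacy_gmlrs_km
print(f"{depth_multiplier:.1f}x battlefield depth")  # 2.0x battlefield depth
```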

The Precision Strike Missile (PrSM) goes further, replacing the older ATACMS with significantly longer range and improved targeting flexibility. These missiles push ground-based strike capability hundreds of kilometers forward without requiring aircraft to penetrate defended airspace.

The shift is doctrinal as well as technical.

Modern artillery is becoming:

  • Longer ranged (2–5× over legacy systems in some categories)
  • Highly precise (meter-level accuracy via guidance kits)
  • Digitally integrated with drones and satellites
  • Faster to deploy and reload

This transforms artillery from “area suppression” into precision deep strike. It reduces the need for risky close-range engagements. It increases survivability through dispersion. It changes the calculus for adversaries who previously relied on sanctuary distance.

If artillery once shaped the tactical battlefield, it now influences operational and even strategic depth.

Peace, paradoxically, is strengthened when enemies know they cannot mass forces safely.


Industrial Base Expansion: The Quiet Multiplier

One often overlooked dimension of strength is production capacity.

Recent budgets have increased funding not only for procurement but also for expanding manufacturing lines for munitions, missiles, and naval components. Artillery shell production, for example, has grown significantly compared to pre-Ukraine war baselines.

Deterrence requires not just weapons—but the capacity to replace them.

A nation that can surge production dissuades prolonged conflict. Attrition warfare becomes unattractive when one side can replenish faster.

Strength is not merely hardware. It is industrial endurance.


Why “Peace Through Strength” Still Resonates

Critics sometimes argue that military buildup invites arms races. That risk is real. History is full of miscalculations. But weakness also invites testing. The absence of credible capability can tempt opportunism.

The philosophical core of “peace through strength” rests on three assumptions:

  1. War is costly and uncertain.
  2. Rational actors avoid unwinnable fights.
  3. Credible capability shapes behavior before violence begins.

The current U.S. modernization effort suggests policymakers believe deterrence requires:

  • Dominant naval presence
  • Persistent air superiority
  • Survivable armored forces
  • Deep, precise ground fires
  • Industrial resilience

The emphasis on advanced features—AI integration, sensor fusion, extended range, precision guidance—indicates a belief that quality matters as much as quantity.

In earlier eras, strength meant bigger fleets. Today it means networked lethality and distributed survivability.


The Strategic Reality

Peace is not maintained by hope alone. It is maintained by perception.

When adversaries calculate, they weigh probability of success. Modern U.S. investments—longer-range artillery, stealthier submarines, integrated fighters, digital armor—are designed to alter that calculation decisively.

The theory is not that war becomes impossible.

The theory is that war becomes irrational.

And if that theory holds, then the enormous investments underway are not preparations for aggression, but insurance against misjudgment.

In the end, “peace through strength” is less about dominance and more about clarity. It is a message delivered not in speeches, but in steel, silicon, propulsion, and range tables.

The hope is simple: that visible strength makes invisible wars unnecessary.

Nipah Virus: A Quiet Threat, A Loud Warning

A collaboration between Lewis McLain & AI

It seems like yesterday that I was in conversation with our granddaughter Lily, then a high schooler; she is now a junior in the architecture program at Texas Tech. She casually mentioned that one of her classes was studying diseases. A day or two later I read an article that had no front-page prominence. It was about something called Covid, and not the beer-sounding version. I forwarded it to Lily and noted with amusement how funny it was to read it so soon after our discussion. I had no clue.

In late January 2026, health authorities confirmed an outbreak of the deadly Nipah virus in the Indian state of West Bengal, prompting heightened surveillance and airport screening in parts of Asia. This marks the first confirmed outbreak in that region since 2007 and has focused global attention on a pathogen that, while rare, embodies the existential tension between humans and the microbial world.

The Washington Post reported that two confirmed cases have been identified and nearly 200 close contacts are being monitored. Authorities in India have initiated enhanced surveillance, lab testing, and field investigations to contain the spread. Despite a historically high fatality rate—estimated between 40 and 70 percent by the U.S. Centers for Disease Control and Prevention—there has been no large-scale spread beyond the initial cluster, and public health officials globally stress that the risk of a pandemic remains low if control measures are maintained.


What the Nipah Virus Is

At its core, Nipah virus (NiV) is an RNA virus in the Henipavirus genus, a biological category shared with the related Hendra virus. It is a highly pathogenic paramyxovirus: the genetic material is single-stranded RNA, and the virus has an envelope that facilitates entry into host cells. Its natural reservoir is fruit bats—particularly Pteropus species, often known as “flying foxes.”

This bat association is not incidental: bats host a remarkable diversity of viruses, from coronaviruses to filoviruses, without showing disease symptoms themselves. That fact has made bats a central focus of zoonotic disease research since the first major recognition of Nipah in 1999.


What “Zoonotic” Means

To understand Nipah, we need to treat zoonotic disease not as an exotic category, but as a foundational principle of infectious disease ecology. A zoonotic pathogen is one that originates in animals and spills over into humans. Humans are not the natural host; we are accidental adaptors.

Zoonosis is a scientific word with real force:

  • “Zoo-” comes from the Greek zōon, meaning animal
  • “-nosis” comes from nosos, meaning disease

When a virus moves from its usual animal host into humans, that jump is termed a spillover event. Those events require specific ecological conditions: close contact with infected animals, suitable viral traits, and susceptible human hosts. Spillover is not a rumor in biology; it’s a measurable dynamic of host–pathogen interactions.

In the case of Nipah, the primary reservoirs are fruit bats. Transmission to humans typically occurs through:

  • Contaminated food, like raw date palm sap touched by bats;
  • Contact with infected livestock, particularly pigs;
  • Direct person-to-person transmission through bodily fluids during close care.

Historical Outbreaks and Patterns

Nipah was first recognized in Malaysia and Singapore in 1998–1999, where pig farmers and workers developed severe respiratory and neurological disease after exposure to infected pigs. That outbreak resulted in hundreds of human cases and prompted the culling of more than a million pigs to stop transmission.

Since then, outbreaks have been reported in South Asia almost every year, particularly in Bangladesh and India, often during the winter months. There, raw date palm sap collection—a traditional practice—can bring humans into contact with bat-contaminated surfaces, enabling spillover.

In Kerala, India, repeated outbreaks (in 2018, 2021, 2023, and 2024) have shown both the virus’s persistence and the benefits of vigilant public health responses.


Biology and Human Disease

Once Nipah infects a human, its clinical course is brutal. Early symptoms resemble common viral infections—fever, headache, muscle pain, cough—but the disease can rapidly escalate to:

  • Encephalitis (inflammation of the brain)
  • Severe respiratory distress
  • Seizures
  • Coma
  • Death

Symptoms usually appear 3–14 days after exposure, but the incubation can extend longer in rare cases. Even survivors can suffer long-term neurological sequelae.

Unlike seasonal influenza or many coronaviruses, Nipah is not generally airborne over long distances. Transmission is most efficient via direct contact with infectious fluids or droplets at close range. That distinction matters: airborne viruses spread rapidly and widely; contact-based spread, while dangerous, is more containable.


Current Outbreak, Surveillance, and Public Response

Today’s headlines remind us why epidemiologists remain vigilant: the confirmed cases in West Bengal have reactivated surveillance networks and border health checks. Airports in parts of Asia are screening travelers from affected areas, and governments across the region, including Thailand and Taiwan, are treating Nipah seriously because of the virus’s lethal potential—even if the outbreak remains limited at present.

China’s state media reported no detected cases domestically but acknowledged the risk of imported infection, illustrating how nations with no local outbreak still feel the ripple effects of these events.


No Cure, No Vaccine—Yet

One of the most sobering facts is that there is no widely approved vaccine or specific antiviral treatment for Nipah virus infection. Care today is supportive and resource-intensive—focused on managing symptoms rather than curing the infection.

Research continues on multiple fronts:

  • Monoclonal antibody therapies
  • Vaccine candidates
  • Antiviral drugs with cross-pathogen potential

Progress is uneven because the rarity of the disease makes large clinical trials difficult. This is the paradox of “rare but severe”: scientific urgency clashes with logistical constraints and market incentives.


Ecosystems, Agriculture, and the Human Footprint

If Nipah teaches one ecological lesson, it is that pathogens do not arise in a vacuum. Human agricultural practices, deforestation, and settlement expansion increasingly bring people into contact with wildlife reservoirs. Bats inhabit the edges of orchards, farms, and human dwellings. Our food systems—date palm sap collection, pig farming—create interfaces where spillover becomes possible.

In a way, the story of Nipah is also a story about how human choices shape disease landscapes. Without those choices—without farms near bat roosts, without wildlife encroaching on human spaces—spillovers would be less frequent.


Looking Ahead: Preparedness, Not Panic

The world’s experience with COVID-19 focused global attention on infectious disease risk. In that broader lens, Nipah occupies a cautionary niche: rare, deadly, and containable—if recognized early and acted upon rapidly. It reminds public health systems why surveillance networks, laboratory capacity, quarantine infrastructure, and clear communication are not luxuries but pillars of resilience.

Today’s outbreak in India underscores this truth: early identification, contact tracing, and containment have limited spread so far. That success should not be mistaken for insignificance. It is a testament to preparedness, not proof that the threat isn’t real.


Nipah virus sits at the crossroads of virology, ecology, public health, and human behavior. Studied deeply, it reveals not only the mechanics of a dangerous virus but also the dynamics that allow viruses to leap across species boundaries. It’s less a distant exotic worry and more a living example of the complex interactions between humans, animals, and the microbial world—a reminder that in a connected biosphere, what happens in bat roosts and date palm groves can matter globally.

Is the U.S. Murder Rate Really the Lowest Since 1900?

A collaboration between Lewis McLain & AI

Every few decades, crime statistics break through assumption and force a pause. The current claim — that the U.S. murder rate is the lowest since 1900 — is one of those moments. It sounds implausible to many ears trained by years of grim headlines. Yet when examined carefully, the claim is largely true, technically defensible, and easy to misunderstand.

This essay follows the long arc: what the data show, how far back they truly reach, and what this moment does — and does not — mean.


The claim in plain terms

Preliminary national data for 2025 suggest a homicide rate near 4.0 deaths per 100,000 people. If finalized at that level, it would be lower than any recorded national homicide rate going back to at least 1900, the earliest point at which scholars can reconstruct reasonably comparable nationwide estimates.
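The per-100,000 arithmetic behind that figure is straightforward. A minimal sketch, using an assumed round population of 340 million (an illustrative number, not an official count):

```python
def homicide_rate(homicides: int, population: int) -> float:
    """Homicides per 100,000 residents."""
    return homicides / population * 100_000

# Roughly 13,600 homicides against an assumed population of 340 million
# works out to the ~4.0 rate cited above.
print(round(homicide_rate(13_600, 340_000_000), 2))  # 4.0
```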

That sentence carries weight — and caveats.


A century-long arc of violence

Viewed across time, American homicide follows a revealing pattern:

  • Early 1900s: Rates around 6 per 100,000, shaped by weak policing, widespread alcohol violence, and rudimentary emergency medicine.
  • 1920s–1930s: A sharp rise during Prohibition and the Great Depression, often exceeding 9 per 100,000.
  • Post–World War II: A calmer interlude. The 1950s hover near 4.5–5.0, later remembered — somewhat romantically — as “normal.”
  • 1965–1995: The great surge. Drugs, urban decay, demographic pressure, and social upheaval push homicide to roughly 10 per 100,000 at its early-1990s peak.
  • 1995–2019: A long, steady decline — one of the most important and underappreciated social trends of the past half-century.
  • 2020–2021: A pandemic shock. Murders spike sharply amid disruption, isolation, and institutional strain.
  • 2022–2025: A rapid correction. Rates fall faster than almost any prior post-crisis period.
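The rank ordering in that timeline can be made explicit. A minimal sketch using the approximate era rates quoted above (all per 100,000; the specific values are the essay's rough figures, not official series):

```python
# Approximate era homicide rates from the timeline above, per 100,000.
era_rates = {
    "early 1900s": 6.0,
    "Prohibition-era peak": 9.0,
    "1950s": 4.75,               # midpoint of the 4.5-5.0 band
    "early-1990s peak": 10.0,
    "2025 (preliminary)": 4.0,
}

# The era with the lowest rate is the preliminary 2025 figure.
lowest_era = min(era_rates, key=era_rates.get)
print(lowest_era)  # 2025 (preliminary)
```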

If the current estimates hold, the country has not merely returned to pre-pandemic levels — it has dropped below every reliably documented year of the modern statistical era.


The shape of the curve matters. The late twentieth century was not the baseline. It was the anomaly.


Why “since 1900” is accurate — and fragile

The phrase survives scrutiny because homicide is the cleanest crime statistic across time. A body produces paperwork. Murder is difficult to ignore, redefine, or quietly erase. That makes it uniquely suitable for long-run comparison.

Still, this is not a laboratory experiment:

  • Early data were reconstructed, not digitally logged.
  • Reporting varied across states and cities.
  • Medical advances matter: many assaults that would have been fatal in 1905 are survivable today.
  • Definitions evolved, though less for homicide than for other crimes.

These limitations do not negate the claim. They simply mean the statement rests on recorded history, not perfect symmetry.


Why the drop is real (and not magical)

No serious analyst believes a single policy, politician, or police tactic “caused” the current low. Crime behaves like a system, not a switch. Several forces likely overlap:

  • Post-pandemic normalization: 2020–2022 were historically abnormal stress years.
  • Demographics: The high-risk young-male cohort is proportionally smaller than in the 1980s or 1990s.
  • Emergency medicine: Faster trauma response quietly reduces homicide totals.
  • Focused deterrence and technology: Less visible than mass incarceration, often more effective.
  • Stabilized illicit markets: Violence spikes when underground economies are disrupted; stability reduces turf conflict.

The sharpness of the decline suggests correction from an abnormal spike rather than the sudden creation of a new social order.


What this moment does not mean

It does not mean:

  • Violence is “solved.”
  • All communities experience safety equally.
  • The trend cannot reverse.
  • Any single ideology has been vindicated.

Crime remains cyclical, sensitive to shocks, and unevenly distributed.


The quieter insight

The deeper lesson is not about 2025. It is about memory.

Many Americans unconsciously treat the violence of the late twentieth century as normal because it coincided with their formative years. In truth, those decades were among the most violent in modern U.S. history. The long decline since the mid-1990s — interrupted but not erased by the pandemic — represents a structural shift away from that era.

If the current figures hold, the United States has crossed below even its early-twentieth-century baseline. That is not a promise about the future. It is evidence that large, complex societies can bend violent behavior downward — slowly, unevenly, and often without noticing until the data force us to look.

History rarely moves in straight lines. But sometimes, over the span of a century, it does bend — quietly, and further than our instincts expect.


Appendix A

How We Know (and What We Can and Cannot Claim)

Data sources

  • FBI Uniform Crime Reports (UCR, post-1930) and preliminary National Incident-Based Reporting System (NIBRS) estimates
  • Historical criminology reconstructions for pre-1930 homicide rates
  • U.S. Census population normalization
  • Large-city trend analyses (e.g., Council on Criminal Justice)
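The population normalization mentioned above is plain arithmetic. As a minimal sketch (the figures below are illustrative placeholders, not actual FBI or Census data):

```python
def homicide_rate_per_100k(homicides: int, population: int) -> float:
    """Normalize a raw homicide count to a rate per 100,000 residents."""
    return homicides / population * 100_000

# Illustrative placeholder figures, not actual FBI or Census data.
rate = homicide_rate_per_100k(14_000, 335_000_000)
print(f"{rate:.2f} per 100,000")  # about 4.18 per 100,000
```

This is why small revisions to either the homicide count or the population estimate can nudge the national rate above or below a threshold like ~4.3 per 100,000.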

Why homicide is used

  • Mandatory reporting
  • Minimal undercount
  • Stable legal definition
  • Cross-century comparability superior to other crimes

Known limitations

  • Early-20th-century figures are estimates
  • Improvements in trauma care reduce deaths independent of violence levels
  • City-level drops may exceed rural declines
  • Final national figures may revise slightly upward or downward

What would invalidate the claim

  • Final 2025 data significantly above ~4.3 per 100,000
  • Discovery of systematic early-20th-century undercounts large enough to reverse rank order (unlikely given existing scholarship)

What remains unresolved

  • Whether the decline stabilizes or rebounds
  • How much credit belongs to policing, technology, culture, or demography
  • Whether future shocks (economic, social, or political) reintroduce volatility

The World Health Organization: Limits of Global Health in a World That Won’t Be Governed

A collaboration between Lewis McLain & AI

The decision by the United States to withdraw from the World Health Organization did not simply reopen a policy debate. It exposed a deeper confusion that has long surrounded the institution itself. Critics and defenders often talk past one another, not because they disagree on facts, but because they carry different, usually unspoken assumptions about what WHO was ever meant to be.

Some imagine a global equivalent of the CDC, capable of decisive action and enforcement. Others fear a supranational authority imposing mandates across borders. In reality, WHO has always been something far more constrained—and far more revealing of the limits of modern international governance.

To understand why WHO struggled when it mattered most, and why the U.S. ultimately chose to leave, it is necessary to begin not with recent controversies, but with the idea that gave birth to the institution itself.


An Institution Born from Ruins

WHO was not created in a moment of optimism. It was created in a moment of exhaustion.

In the aftermath of World War II, infectious disease followed mass displacement and demobilization. Typhus, cholera, tuberculosis, and malaria crossed borders with ease. The war made one reality unavoidable: public health could no longer be treated as purely domestic.

In 1948, WHO was formally established, consolidating earlier international health efforts into a single global body. Its founding constitution declared that “the enjoyment of the highest attainable standard of health is one of the fundamental rights of every human being.” The moral ambition was expansive. The institutional design beneath it was deliberately narrow.

WHO was structured around three core principles:

  • Universal membership, even at the cost of compromise
  • Respect for national sovereignty, especially over internal affairs
  • Technical authority embedded within diplomacy, not above it

WHO would coordinate, not command. It would advise, not enforce. It would preserve access even when confrontation seemed justified.

This design reflected the political realities of the postwar world. Over time, it would also define WHO’s limits.


Science Without Sovereignty: The Core Tension

Every major outbreak reveals the same contradiction.

Governments want early warnings from others.
They hesitate to provide early warnings themselves.

Early disclosure risks economic disruption, political blame, and international stigma. Delay risks uncontrolled spread and preventable death. WHO operates inside this narrow corridor, dependent on the cooperation of member states whose incentives often cut against transparency.

When information flows freely, WHO appears effective. When it does not, WHO appears compromised—even when it lacks the authority to compel disclosure. COVID-19 did not create this tension. It forced it into view.


Scale, Capacity, and Misplaced Expectations

Public expectations of WHO have rarely aligned with its actual capacity.

WHO’s entire budget is comparable to that of a large hospital system, not a global emergency command. Its workforce—under ten thousand even before recent cuts—is spread across more than 160 countries, often embedded as advisors rather than operators.

WHO does not run hospitals, stockpile national reserves, or command laboratories. Expecting it to “control” a pandemic is akin to expecting a weather service to stop a hurricane. Its function is detection, interpretation, and communication—not coercion.


Funding, Crisis, and the Quiet Geometry of Power

One structural feature of WHO is essential to understanding its behavior: how it is funded.

Only a minority of WHO’s budget comes from mandatory, assessed contributions. The majority—well over two-thirds in recent cycles—comes from voluntary, earmarked funding, much of it tied to specific diseases, emergencies, or crises.

This matters because earmarked funding shapes priorities. Programs that attract donor interest expand. Emergencies become more fundable than prevention. Crisis, over time, becomes currency.

WHO leadership is acutely aware that alienating major contributors—financial or political—can have immediate operational consequences. This is not corruption. It is dependence.


China’s Role: Influence Without Formal Control

Within this funding and governance structure, China occupies a distinctive position.

China is not WHO's largest financial contributor; historically, the United States filled that role. China's influence flows instead from indispensability. Home to some 1.4 billion people and a central node in global travel and trade, China is essential to credible disease surveillance in East Asia and beyond.

This creates an asymmetry. WHO needs access to China more than China needs WHO.

That imbalance surfaced repeatedly:

  • in the careful language surrounding early COVID-19 transmission,
  • in the reluctance to escalate public warnings without Chinese confirmation,
  • and most visibly in the exclusion of Taiwan from formal WHO participation despite its advanced public-health infrastructure.

Taiwan’s exclusion was not a scientific judgment. It was the point at which universality collided with access. WHO chose access.


When Structural Limits Became Visible

COVID-19 was not merely a failure of response; it was a stress test of incentives.

WHO repeated early assurances from Chinese authorities, calibrated its language carefully, and delayed escalation. Subsequent reviews focused on technical delays and verification gaps. Less often discussed was why escalation felt institutionally dangerous.

Escalation threatened access.
Access threatened funding stability.
Funding threatened operational survival.

This was the moment when diplomacy, science, and finance converged—and constrained action.


What WHO Never Was

For clarity: WHO cannot impose laws, mandate lockdowns, or override governments. It is not a global sovereign. Its failures stem from weakness, not domination.

This distinction matters, because it reframes the question. The issue is not whether WHO failed to act like a global authority. It is whether the world ever empowered it to be one.


The U.S. Withdrawal: An Unspoken Calculation

Publicly, the U.S. cited accountability failures and stalled reform. Privately—and structurally—the concern ran deeper.

From a U.S. perspective, a paradox had emerged:

  • The U.S. paid more.
  • China constrained more.
  • WHO navigated carefully between them.

Reform efforts aimed at reducing earmarked funding, strengthening verification authority, or increasing mandatory dues stalled repeatedly. Member states, including China, showed little appetite for changes that diluted sovereignty or leverage.

Withdrawal thus became less about WHO itself and more about resetting leverage outside the institution—through bilateral surveillance, intelligence-linked monitoring, and allied coordination.

Whether that strategy proves superior remains to be seen.


What WHO Ultimately Reveals

WHO is neither villain nor savior. It is a mirror.

It reflects the difficulty of governing shared risk in a world that prizes autonomy, where transparency is costly and influence often outweighs candor. Its failures were not aberrations; they were predictable consequences of its design.

The U.S. decision to leave does not end global health coordination. It resets the stage. Existing channels will persist in altered form, new arrangements will be tested, and old assumptions will meet reality.

Whether this recalibration produces greater clarity, fragmentation, or a different kind of leverage will only be known over time—measured not by rhetoric, but by the handling of the next outbreak, or by the quiet success of early detection before one takes hold.


Appendices


Appendix A: Ebola in West Africa — The Cost of Waiting

Early warnings in 2014 reached WHO quickly. Action did not. Fear of overreaction delayed declaration. By the time a global emergency was declared, Ebola had spread across multiple countries. Health systems collapsed. Over 11,000 died.

Delay was not neutrality. It was a decision.


Appendix B: SARS — When Speed Beat Diplomacy

In 2003, WHO acted decisively — issuing alerts, coordinating labs, and recommending travel advisories without full political consensus. SARS was contained within months. The difference was timing, not authority.


Appendix C: A Model for Detection and Orderly Communication

A viable future model would:

  • Separate detection from declaration
  • Use probability ranges instead of false certainty
  • Enforce a structured communication cadence
  • Preserve sovereignty while incentivizing transparency
  • Mandate after-action review

It does not eliminate tradeoffs. It prevents them from being resolved silently and politically.

Prohibition: America’s Great Moral Experiment—and the Courage to Undo It

A collaboration between Lewis McLain & AI

https://fourteeneastmag.com/wp-content/uploads/2020/01/ProhibitionHistory_wikicom.jpg

Prohibition stands as one of the most instructive chapters in American public life, not because it failed, but because it failed honestly—with good intentions, broad support, and devastating unintended consequences. It is a case study in how a democratic society wrestles with morality, law, and human behavior, and what it means to admit error without abandoning principle.

The Moral Confidence of the Early 20th Century

Prohibition did not emerge from fanaticism. It grew from reform.

By the late 1800s and early 1900s, alcohol was deeply entangled with social harm. Excessive drinking contributed to domestic violence, workplace injuries, chronic poverty, and political corruption. Saloons were often tied to exploitative labor practices and machine politics. Women, in particular, bore the costs at home with little legal protection.

The temperance movement brought together an unlikely coalition: Protestant churches, progressive reformers, women’s organizations, public-health advocates, and rural voters who viewed alcohol as an urban vice. Their logic was straightforward: if alcohol is a primary cause of social disorder, then eliminating alcohol will reduce disorder.

It was a classic Progressive Era belief—social problems have technical solutions, and law can accelerate moral improvement.

In 1919, that belief crystallized into the 18th Amendment. In 1920, Prohibition went into effect nationwide.

The Reality That Followed

The policy did not collapse overnight. It unraveled systemically.

First, consumption adapted rather than disappeared. Alcohol did not vanish; it went underground. Speakeasies flourished in cities. Home distillation surged in rural areas. The quality of alcohol often worsened, leading to poisonings and long-term health damage. Drinking became less visible but more dangerous.

Second, crime industrialized. Prohibition transformed alcohol from a regulated commodity into a high-margin illicit product. Criminal organizations stepped in to meet demand. Smuggling routes expanded. Violence became a business tool. What had once been localized criminal activity evolved into national syndicates with unprecedented resources.

Third, respect for the law eroded. Millions of ordinary Americans violated Prohibition laws casually and repeatedly. Enforcement became selective, uneven, and corruptible. Police officers, judges, and politicians were placed in impossible positions—expected to enforce a law that large portions of the public openly rejected.

This was not a moral awakening; it was a credibility crisis. When law drifts too far from lived reality, it stops teaching virtue and starts teaching evasion.

The Cost No One Planned For

Perhaps the most damaging consequence was institutional.

Prohibition weakened faith in governance itself. Citizens learned that laws could be aspirational rather than practical, symbolic rather than enforceable. The gap between public virtue and private behavior widened. Hypocrisy became visible, and cynicism followed.

The federal government also discovered its limits. Enforcing Prohibition required resources far beyond what Congress was willing to provide. Borders proved porous. Local governments resisted. States interpreted enforcement unevenly. The machinery of the state strained under the weight of moral ambition.

Prohibition revealed a hard truth: the state is powerful, but not omnipotent—and pretending otherwise corrodes trust.

Why Repeal Was the Real Achievement

The repeal of Prohibition in 1933 is more significant than its enactment.

Governments are adept at creating policy. They are far less adept at reversing it. Repeal required lawmakers and citizens alike to concede that a deeply moral project had produced deeply immoral outcomes—not because the goals were wrong, but because the method was flawed.

The 21st Amendment did not celebrate excess. It acknowledged complexity.

Repeal restored regulation rather than chaos. Alcohol returned to legal channels where quality could be controlled, taxes collected, and criminal enterprises disrupted. Public health and safety improved not because Americans became virtuous overnight, but because law once again aligned with human behavior.

This was not moral surrender. It was moral realism.

The Enduring Lesson

Prohibition is often remembered as a joke—speakeasies, gangsters, bathtub gin. That memory misses the point.

The real lesson is about limits:

  • The limit of law as a tool for shaping personal behavior
  • The limit of enforcement in a free society
  • The limit of certainty when policy meets culture

Prohibition teaches that durable reform moves in sequence: culture, then law—not the other way around. When law attempts to leapfrog culture, it creates shadow systems that are harder to govern and more dangerous than the original problem.

This is why Prohibition continues to echo in modern debates—over drugs, gambling, speech, and even technology. Different issues, same temptation: legislate the outcome rather than shape the conditions.

Why January 20 Matters

January 20 sits quietly in the civic calendar as Inauguration Day, fixed there by the 20th Amendment, ratified in January 1933, the same year the nation set about undoing Prohibition.

In a year devoted to transitions of power and public authority, the United States also demonstrated something rarer than resolve: humility. It recognized that strength is not found in doubling down on a mistake, but in changing course before the damage becomes irreversible.

A Closing Reflection

Prohibition failed not because Americans rejected morality, but because morality cannot be mass-produced by statute. It must be cultivated, modeled, and supported by institutions that understand human nature rather than deny it.

That lesson is neither liberal nor conservative. It is simply hard-earned.

And it is one worth remembering—especially when certainty feels tempting and restraint feels weak.

The Day After: December 6, 1933 — When the Country Woke Up Sober

https://prohibition.themobmuseum.org/wp-content/uploads/2016/11/RepealCelebBarprohibition.jpg
https://jhgraham.com/wp-content/uploads/2017/03/april-7-1933-we-want-beer.jpg
https://i.etsystatic.com/12414326/r/il/8aeb63/2573703189/il_570xN.2573703189_5fnd.jpg

The repeal of Prohibition did not end with speeches or signatures. Its meaning unfolded the next morning.

On December 6, 1933, the day after the 21st Amendment was ratified and national Prohibition ended, America did not descend into revelry or collapse into vice. Instead, something quieter and more revealing happened: normal life resumed.

Bars did not instantly become lawless. Breweries did not flood streets with alcohol. Families did not unravel overnight. What returned was not excess, but legibility. Alcohol was no longer a rumor, a secret, or a criminal enterprise. It became visible again—regulated, taxable, inspectable, boring in the way lawful things usually are.

That boredom mattered.

From Illicit Thrill to Regulated Reality

Under Prohibition, alcohol carried the romance of defiance. Speakeasies thrived not merely because people wanted to drink, but because drinking had become a small act of rebellion. The day after repeal stripped alcohol of that mystique.

When something returns to daylight, it loses its glamour.

Legal beer—initially capped at low alcohol content—reappeared first. Breweries reopened cautiously. Distributors dusted off ledgers. States scrambled to design regulatory systems. Cities issued permits. Clerks checked licenses. Accountants sharpened pencils.

The machinery of ordinary governance restarted.

Crime syndicates, by contrast, began losing oxygen immediately. Without monopoly pricing and legal risk premiums, profits shrank. Violence became less “necessary.” The underground market contracted not because criminals found virtue, but because economics changed.

The day after repeal demonstrated a simple truth: regulation outcompetes prohibition when demand is durable.

A Subtle Restoration of Trust

Perhaps the most important change on January 21 was psychological.

For over a decade, millions of Americans had lived with a quiet contradiction: respecting the law in public while breaking it in private. The day after repeal lifted that tension. Citizens no longer had to pretend. Police no longer had to look away. Judges no longer had to perform moral arithmetic in sentencing.

The law once again described reality rather than denying it.

That alignment matters more than slogans. A legal system does not function on punishment alone; it functions on voluntary compliance. The day after repeal restored the possibility that citizens and institutions could once again inhabit the same moral universe.

What Did Not Happen

Equally instructive is what did not occur the day after repeal:

  • There was no national spike in chaos
  • No collapse of public morals
  • No evidence that restraint had been holding civilization together by its fingernails

Life continued. People went to work. Families ate dinner. The republic survived the admission of error.

That absence of catastrophe is itself an argument.

Why This Matters for a Modern Reader

Publishing this essay the day after January 20 invites an intentional parallel.

January 20 is about authority—who holds it, how it is transferred, how it is justified. January 21 is about what authority does once the ceremony is over. The day after asks a harder question than the day of:

Does policy still make sense when the speeches stop?

Prohibition failed that test. Repeal passed it.

The day after repeal reminds us that responsible governance is not measured by how dramatic a law sounds at enactment, but by how quietly society functions once it is in force.

A Final Reflection to Close the Essay

The repeal of Prohibition did not make America virtuous. It made America honest—about human behavior, about enforcement limits, and about the difference between moral aspiration and civic design.

The day after repeal, the country woke up without a grand illusion—and discovered it could still stand.

That may be the most encouraging lesson of all.

Davos and the World Economic Forum: A Plain-Spoken Guide for the Curious

A collaboration between Lewis McLain & AI

Every January, headlines begin to murmur about a small Alpine town in Switzerland where presidents, prime ministers, billionaires, activists, and journalists gather in winter coats and sensible boots. The place is Davos. The occasion is the annual meeting of the World Economic Forum.

For many people, what they hear sounds mysterious, elite, or faintly ominous. For others, it sounds like empty talk in a luxury setting. Most people simply want to know: what is this thing, who’s there, and why does it matter?

This essay is written for that middle ground—the reader who knows little, hears a lot, and wants a clearer picture without conspiracy or cheerleading.


What the World Economic Forum actually is

The World Economic Forum is not a world government. It cannot pass laws, levy taxes, deploy troops, or compel nations or companies to do anything. It is an international nonprofit organization based in Geneva whose central purpose is to convene people who rarely sit in the same room: political leaders, business executives, academics, civil-society leaders, technologists, and journalists.

Its core belief is simple: many of the biggest problems of modern life—financial instability, pandemics, climate change, technological disruption—do not respect borders or sectors. Governments alone cannot solve them. Markets alone cannot solve them. NGOs alone cannot solve them. The Forum exists to provide a neutral place where these worlds collide, talk, argue, and sometimes align.

That makes the Forum a platform, not a power. Its influence comes from who attends and what conversations happen—not from any formal authority.


How Davos became Davos

The Forum began modestly in 1971, founded by German economist Klaus Schwab as the European Management Forum. The early meetings focused on helping European companies learn modern management practices. Davos, a quiet mountain town, was chosen deliberately: remote enough to keep people focused, neutral enough to avoid national dominance.

Over time, as globalization accelerated, business problems became political problems, technological problems became ethical problems, and economic decisions began shaping entire societies. The Forum expanded with the world it was trying to understand.

What started with a few hundred executives grew into a global gathering. Today, the annual meeting typically brings about 2,500–3,000 participants from more than 130 countries, including dozens of heads of state and government, hundreds of CEOs, leaders of international organizations, researchers, activists, and several hundred journalists. It is large—but intentionally capped to remain workable rather than sprawling.


What actually happens there

The popular image of Davos is a series of panel discussions filled with polished talking points. Those panels do exist, and they are public-facing for a reason: they help surface ideas and set agendas.

But the real substance happens elsewhere.

Davos is designed for density of interaction. Leaders move between formal sessions, small working groups, bilateral meetings, and unplanned conversations in hallways and cafés. Many of these meetings are private and off the record—not because secrets are being plotted, but because frank conversation is impossible when every sentence becomes a headline.

No binding decisions are made. No treaties are signed. What does happen is relationship-building, early alignment, and problem-definition. In global affairs, those are often the invisible first steps before any formal action occurs later through governments, markets, or institutions.


What the Forum has actually achieved

It’s fair to say the World Economic Forum has not “solved” the world’s problems. Anyone claiming otherwise should be met with raised eyebrows. Its contributions are subtler.

First, the Forum is exceptionally good at agenda-setting. Ideas such as stakeholder capitalism, ESG reporting, global health coordination, and AI governance gained early prominence at Davos before moving into boardrooms and legislatures.

Second, the Forum has served as an incubator for cooperation. It has helped launch or align initiatives in areas like vaccine access, climate finance, and cybersecurity norms by bringing public and private actors together before formal mechanisms existed.

Third, Davos has functioned at times as an informal diplomatic space. Leaders from rival nations have used it to test ideas, reduce misunderstandings, or reopen channels of communication. These moments rarely make headlines, but they matter precisely because they happen before crises harden into policy.

In short, Davos doesn’t produce outcomes the way elections or treaties do. It produces conditions under which outcomes later become possible.


The criticisms—and why they persist

Criticism of Davos is not irrational. It is, by design, an elite gathering. Many participants arrive by private jet to discuss inequality, climate change, or social strain. The optics are unavoidable, and resentment is understandable.

There is also a persistent frustration that Davos produces more talk than action. That criticism confuses a forum with an executive authority—but it still lands emotionally, because people want visible results.

Finally, there is the concern that some voices—particularly from poorer countries or grassroots movements—struggle to compete with corporate and state power. The Forum has tried to broaden participation, but the imbalance remains a legitimate tension.

These critiques don’t mean Davos is useless. They mean it is limited, and that limitation should be understood rather than ignored.


The bottom line

The World Economic Forum is neither a secret government nor an empty spectacle. It is a tool—an imperfect one—for convening global influence in one place and forcing conversations that rarely happen elsewhere.

Davos matters not because it commands the world, but because it reflects it. The same tensions people feel about globalization, inequality, power, and accountability show up there in concentrated form. That makes it an easy target—and also a useful mirror.

In a fragmented age, the experiment of bringing rivals, allies, critics, and skeptics into the same snowy town continues not because it is ideal, but because no better alternative has yet emerged. Davos doesn’t promise solutions. It offers something rarer and more fragile: the possibility that people with power might listen to one another before deciding what to do next.


Appendix A: Security, Protest, and Public Order at Davos

One of the most common questions people ask—often with suspicion—is: How can so many powerful people gather without turning the place into a fortress?

Security at Davos is led almost entirely by Swiss public authorities, not private forces. Federal and cantonal police, together with local Davos police, lead the operation, while Swiss Army units serve in support roles such as airspace monitoring, logistics, and rapid response. Visiting leaders bring their own close-protection teams, but overall coordination remains Swiss.

The approach is layered and restrained. Davos is a small, geographically isolated town with limited access routes, which allows authorities to manage entry into the town rather than militarize individual buildings. Accreditation controls, police presence, and venue security form concentric rings, while the overall posture emphasizes predictability and calm rather than intimidation.

Protests are not banned. Switzerland strongly protects the right to assembly. Demonstrations are permitted with advance coordination, designated areas, and agreed routes. Police focus on separation and de-escalation, not suppression. As a result, protests at Davos are usually visible, peaceful, and orderly—more expression than confrontation.

Security at Davos works not because it is overwhelming, but because it is boringly competent.


Appendix B: Who Sets the Agenda?

The Forum’s agenda is not improvised, nor dictated by any single government or corporation.

At the top is a Board of Trustees, responsible for mission, long-term direction, and governance. The board does not choose individual panel topics or speakers, but it defines strategic priorities—the big questions the Forum believes the world must confront in the coming years.

Turning those priorities into an annual theme and program is handled by executive leadership, standing expert networks, and ongoing consultation with governments, international organizations, companies, and research institutions. Themes are often developed years in advance and refined annually as conditions change.

The board sets the compass, the staff draws the map, and participants fill in the terrain.


Appendix C: Where Is the Founder Now?

After leading the organization for more than five decades, Klaus Schwab has stepped back from day-to-day control. He no longer runs operations, sets agendas, or directs programming.

Today, his role is honorary and advisory—that of an institutional elder rather than an executive. Operational leadership rests with a new generation of executives, reflecting the Forum’s attempt to evolve beyond its founder while preserving continuity.


Why the appendices matter

Questions about security, agenda control, and founder influence are often where speculation rushes in to fill silence. Laying out the mechanics doesn’t require defending the Forum—it simply replaces myth with structure.

The World Economic Forum’s influence lies less in who controls it than in who chooses to show up. That remains its defining feature—and its enduring controversy.

Leaving the City Better: Leadership, Limits, and the Question of a Bridge Too Far

A collaboration between Lewis McLain & AI

Leaders inherit messes. They step into offices burdened by deferred maintenance, ignored threats, regulatory capture, and systems quietly bent by special interests. In such a world, passivity does not preserve stability; it preserves neglect. Action becomes the moral baseline, not the exception. The enduring civic question is not whether leaders should push, but how far pushing remains stewardship rather than overreach.

The ancient Greek civic pledge offers a compass: leave the city better than you found it. Public life is stewardship across generations. Authority exists to repair what neglect erodes and to confront what avoidance normalizes. The statesman acts not for comfort, but for continuity—aware that problems ignored do not stay small.

This is where leadership grows hard. Entrenched interests organize precisely because complexity protects them. Manipulation thrives in delay. Incentives reward stasis. Gentle pressure rarely unwinds decades of avoidance. Leaders who push against these forces often look abrasive in real time, not because ego drives them, but because reform disturbs equilibria that were never healthy to begin with.

The phrase “a bridge too far” sharpens this tension. It enters common language through Cornelius Ryan’s account of Operation Market Garden in A Bridge Too Far. The plan is bold and morally urgent—end the war sooner, save lives—but it asks reality to cooperate with optimism. One bridge lies just beyond what logistics, intelligence, and time can support. The failure is not daring; it is miscalculation. The lesson is not “do nothing.” It is “know the load.”

Applied to leadership, the metaphor cuts both ways. Societies stagnate when leaders merely manage decline. Yet institutions exist for reasons that are not always cynical. Some limits preserve legitimacy, trust, and continuity—the invisible infrastructure of a functioning republic. The craft of leadership lies in distinguishing protective limits from self-serving barriers, then pressing the latter without snapping the former.

Seen through this lens, modern leaders often operate in the present tense of pressure. They test boundaries, confront norms, and treat friction as evidence of movement. That posture can be corrective when systems have grown complacent. It can also be hazardous when escalation outruns institutional capacity or public trust. A bridge does not fail the first time it is stressed; it fails after stress becomes routine.

This is where Donald Trump enters the conversation—not as verdict, but as caution. Trump governs with explicit confrontation. He challenges norms openly, personalizes conflict, and compresses long-delayed debates into immediate contests. Supporters see overdue action against captured systems. Critics see erosion of the trust that makes systems work at all. Both readings coexist because the pressure is real and the inheritance is heavy.

The open question is not whether such pressure is justified—it often is—but whether its sequencing and tone preserve the very institutions meant to be improved. The post-election period after 2020 brings the metaphor into focus. Legal challenges proceed as allowed; courts rule; states certify. Rhetoric, however, accelerates beyond evidence, and persuasion shades toward insistence. The bridge becomes visible. Not crossed decisively, but clearly approached. The risk is not a single act; it is precedent—teaching future leaders that legitimacy can be strained without immediate collapse.

January 6 stands as a symbolic edge of that bridge. Whatever one concludes about intent, the episode reveals an old truth: rhetoric travels faster than control. When foundational processes are publicly contested, leaders cannot always govern how followers translate suspicion into action. The system endures—but at a cost to shared reality.

None of this denies the core point: leaders given a boatload of neglect are not obligated to be passive. Improvement demands pressure. But the Greek ideal pairs strength with sophrosyne—measured restraint guided by wisdom. The city is left better not by humiliating institutions, but by restoring their purpose; not by replacing trust with loyalty to a person, but by renewing confidence in processes that outlast any one leader.

So what does leadership require in a world of manipulation and special interests?

It requires action, because neglect compounds.
It requires push, because stagnation corrodes.
It requires listening, because limits exist for reasons.
It requires calibration, because strength without proportion becomes its own form of neglect.

A bridge too far is rarely obvious in the moment. It announces itself later—through fragility, cynicism, or precedent. The enduring task of leadership is to cross the bridges that must be crossed, stop short of those that should not, and leave the city—tested, repaired, and steadier—better than it was found.

Artificial Intelligence in City Government: From Adoption to Accountability

A Practical Framework for Innovation, Oversight, and Public Trust

A collaboration between Lewis McLain & AI – a companion to the previous blog on AI

Artificial intelligence has moved from novelty to necessity in public institutions. What began as experimental tools for drafting documents or summarizing data is now embedded in systems that influence budgeting, service delivery, enforcement prioritization, procurement screening, and public communication. Cities are discovering that AI is no longer optional—but neither is governance.

This essay unifies two truths that are often treated as competing ideas but must now be held together:

  1. AI adoption is inevitable and necessary if cities are to remain operationally effective and fiscally sustainable.
  2. AI oversight is now unavoidable wherever systems influence decisions affecting people, rights, or public trust.

These are not contradictions. They are sequential realities. Adoption without governance leads to chaos. Governance without adoption leads to irrelevance. The task for modern city leadership is to do both—intentionally.

I. The Adoption Imperative: AI as Municipal Infrastructure

Cities face structural pressures that are not temporary: constrained budgets, difficulty recruiting and retaining staff, growing service demands, and rising analytical complexity. AI tools offer a way to expand institutional capacity without expanding payrolls at the same rate.

Common municipal uses already include:

  • Drafting ordinances, reports, and correspondence
  • Summarizing public input and staff analysis
  • Forecasting revenues, expenditures, and service demand
  • Supporting customer service through chat or triage tools
  • Enhancing internal research and analytics

In this sense, AI is not a gadget. It is infrastructure, comparable to ERP systems, GIS, or financial modeling platforms. Cities that delay adoption will find themselves less capable, less competitive, and more expensive to operate.

Adoption, however, is not merely technical. AI reshapes workflows, compresses tasks, and changes how work is performed. Over time, this may alter staffing needs. The question is not whether AI will change city operations—it is already changing them. The question is whether those changes are guided or accidental.

II. The Oversight Imperative: Why Governance Is Now Required

As AI systems move beyond internal productivity and begin to influence decisions—directly or indirectly—oversight becomes essential.

AI systems are now used, or embedded through vendors, in areas such as:

  • Permit or inspection prioritization
  • Eligibility screening for programs or services
  • Vendor risk scoring and procurement screening
  • Enforcement triage
  • Public safety analytics

When AI recommendations shape outcomes, even if a human signs off, accountability cannot be vague. Errors at scale, opaque logic, and undocumented assumptions create legal exposure and erode public trust faster than traditional human error.

Oversight is required because:

  • Scale magnifies mistakes: a single flaw can affect thousands before detection.
  • Opacity undermines legitimacy: residents are less forgiving of decisions they cannot understand.
  • Legal scrutiny is increasing: courts and legislatures are paying closer attention to algorithmic decision-making.

Oversight is not about banning AI. It is about ensuring AI is used responsibly, transparently, and under human control.

III. Bridging Adoption and Oversight: A Two-Speed Framework

The tension between “move fast” and “govern carefully” dissolves once AI uses are separated by risk.

Low-Risk, Internal AI Uses

Examples include drafting, summarization, forecasting, research, and internal analytics.

Approach:
Adopt quickly, document lightly, train staff, and monitor outcomes.

Decision-Adjacent or High-Risk AI Uses

Examples include enforcement prioritization, eligibility determinations, public safety analytics, and procurement screening affecting vendors.

Approach:
Require review, documentation, transparency, and meaningful human oversight before deployment.

This two-speed framework allows cities to capture productivity benefits immediately while placing guardrails only where risk to rights, equity, or trust is real.
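The two-speed idea can be expressed as a simple intake rule that routes each proposed AI use to a fast or a guarded track. This is a hypothetical illustration, not an official rubric: the category names, the tier labels, and the `affects_individuals` test are all assumptions made for the sketch.

```python
# Hypothetical sketch of a two-speed AI intake rule.
# Category names and tier labels are illustrative assumptions,
# not an official municipal rubric.

HIGH_RISK_CATEGORIES = {
    "enforcement_prioritization",
    "eligibility_determination",
    "public_safety_analytics",
    "procurement_screening",
}

def governance_track(use_case: str, affects_individuals: bool) -> str:
    """Route a proposed AI use to a fast or guarded adoption track."""
    if use_case in HIGH_RISK_CATEGORIES or affects_individuals:
        # Decision-adjacent: review, documentation, and meaningful
        # human oversight are required before deployment.
        return "high-risk: review before deployment"
    # Internal productivity: adopt quickly, document lightly, monitor.
    return "low-risk: adopt and monitor"

print(governance_track("drafting", affects_individuals=False))
print(governance_track("enforcement_prioritization", affects_individuals=True))
```

Note the deliberate asymmetry: a use case defaults to the guarded track if it either falls in a named high-risk category or touches individual residents, so the fast track is the exception that must be earned, not the default.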

IV. Texas Context: Statewide Direction on AI Governance

The Texas Legislature reinforced this balanced approach through the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026. The law does not prohibit AI use. Instead, it establishes expectations for transparency, accountability, and prohibited practices—particularly for government entities.

Key elements include:

  • Disclosure when residents interact with AI systems
  • Prohibitions on social scoring by government
  • Restrictions on discriminatory AI use
  • Guardrails around biometric and surveillance applications
  • Civil penalties for unlawful or deceptive deployment
  • Creation of a statewide Artificial Intelligence Council

The message is clear: Texas expects governments to adopt AI responsibly—neither recklessly nor fearfully.

V. Implications for Cities and Transit Agencies

Cities are already using AI, often unknowingly, through vendor-provided software. Transit agencies face elevated exposure because they combine finance, enforcement, surveillance, and public safety.

The greatest risk is not AI itself, but uncontrolled AI:

  • Vendor-embedded algorithms without disclosure
  • No documented human accountability
  • No audit trail
  • No process for suspension or correction

Cities that act early reduce legal risk, preserve public trust, and maintain operational flexibility.
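The four accountability gaps above can be closed with something as modest as a per-system inventory record. The sketch below is one possible shape for such a record; the field names and the completeness test are assumptions for illustration, not a statutory schema or a vendor requirement.

```python
# Hypothetical sketch of a minimal AI system inventory record,
# covering the accountability gaps listed above. Field names are
# illustrative assumptions, not a statutory schema.

from dataclasses import dataclass, field

@dataclass
class AISystemRecord:
    name: str
    vendor: str                  # vendor-embedded algorithms must be disclosed
    accountable_official: str    # documented human accountability
    disclosed_to_public: bool    # resident-facing disclosure made?
    audit_log_location: str      # where outputs are traceable (audit trail)
    suspension_procedure: str    # process for suspension or correction
    known_issues: list = field(default_factory=list)

def is_controlled(record: AISystemRecord) -> bool:
    """A system counts as 'controlled' only if every accountability field is filled."""
    return all([
        record.accountable_official,
        record.disclosed_to_public,
        record.audit_log_location,
        record.suspension_procedure,
    ])

# A vendor tool that arrived without governance (hypothetical example):
fare_model = AISystemRecord(
    name="fare-evasion-triage",
    vendor="Acme Transit Analytics",  # hypothetical vendor name
    accountable_official="",
    disclosed_to_public=False,
    audit_log_location="",
    suspension_procedure="",
)
print(is_controlled(fare_model))  # prints False: no named official, no audit trail
```

Even a registry this thin makes “uncontrolled AI” visible: any record that fails the completeness check is, by definition, a system the city is running without disclosure, accountability, an audit trail, or a correction path.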

VI. Workforce Implications: Accurate and Defensible Language

AI will change how work is done over time. It would be inaccurate and irresponsible to claim otherwise.

At the same time, AI does not mandate immediate workforce reductions. In public institutions, workforce impacts—if they occur—are most likely to happen gradually through:

  • Attrition
  • Reassignment
  • Retraining
  • Role redesign

Final staffing decisions remain with City leadership and City Council. AI is a tool for improving capacity and sustainability, not an automatic trigger for reductions.

Conclusion: Coherent, Accountable AI

AI adoption without governance invites chaos. Governance without adoption invites stagnation. Cities that succeed will do both—moving quickly where risk is low and governing carefully where risk is high.

This is not about technology hype. It is about institutional competence in a digital age.


Appendix 1 — Texas Responsible Artificial Intelligence Governance Act (HB 149)


                                                   H.B. No. 149

AN ACT

relating to regulation of the use of artificial intelligence systems in this state; providing civil penalties.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1.  This Act may be cited as the Texas Responsible Artificial Intelligence Governance Act.

SECTION 2.  Section 503.001, Business & Commerce Code, is amended by amending Subsections (a) and (e) and adding Subsections (b-1) and (f) to read as follows:

(a)  In this section:

(1)  “Artificial intelligence system” has the meaning assigned by Section 551.001.

(2)  “Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.

(b-1)  For purposes of Subsection (b), an individual has not been informed of and has not provided consent for the capture or storage of a biometric identifier of an individual for a commercial purpose based solely on the existence of an image or other media containing one or more biometric identifiers of the individual on the Internet or other publicly available source unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate.

(e)  This section does not apply to:

(1)  voiceprint data retained by a financial institution or an affiliate of a financial institution, as those terms are defined by 15 U.S.C. Section 6809;

(2)  the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual; or

(3)  the development or deployment of an artificial intelligence model or system for the purposes of:

(A)  preventing, detecting, protecting against, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any other illegal activity;

(B)  preserving the integrity or security of a system; or

(C)  investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, a malicious or deceptive activity, or any other illegal activity.

(f)  If a biometric identifier captured for the purpose of training an artificial intelligence system is subsequently used for a commercial purpose not described by Subsection (e), the person possessing the biometric identifier is subject to:

(1)  this section’s provisions for the possession and destruction of a biometric identifier; and

(2)  the penalties associated with a violation of this section.

SECTION 3.  Section 541.104(a), Business & Commerce Code, is amended to read as follows:

(a)  A processor shall adhere to the instructions of a controller and shall assist the controller in meeting or complying with the controller’s duties or requirements under this chapter, including:

(1)  assisting the controller in responding to consumer rights requests submitted under Section 541.051 by using appropriate technical and organizational measures, as reasonably practicable, taking into account the nature of processing and the information available to the processor;

(2)  assisting the controller with regard to complying with requirements relating to the security of processing personal data, and if applicable, the personal data collected, stored, and processed by an artificial intelligence system, as that term is defined by Section 551.001, and to the notification of a breach of security of the processor’s system under Chapter 521, taking into account the nature of processing and the information available to the processor; and

(3)  providing necessary information to enable the controller to conduct and document data protection assessments under Section 541.105.

SECTION 4.  Title 11, Business & Commerce Code, is amended by adding Subtitle D to read as follows:

SUBTITLE D.  ARTIFICIAL INTELLIGENCE PROTECTION

CHAPTER 551.  GENERAL PROVISIONS

Sec. 551.001.  DEFINITIONS.  In this subtitle:

(1)  “Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

(2)  “Consumer” means an individual who is a resident of this state acting only in an individual or household context.  The term does not include an individual acting in a commercial or employment context.

(3)  “Council” means the Texas Artificial Intelligence Council established under Chapter 554.

Sec. 551.002.  APPLICABILITY OF SUBTITLE.  This subtitle applies only to a person who:

(1)  promotes, advertises, or conducts business in this state;

(2)  produces a product or service used by residents of this state; or

(3)  develops or deploys an artificial intelligence system in this state.

Sec. 551.003.  CONSTRUCTION AND APPLICATION OF SUBTITLE.  This subtitle shall be broadly construed and applied to promote its underlying purposes, which are to:

(1)  facilitate and advance the responsible development and use of artificial intelligence systems;

(2)  protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems;

(3)  provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and

(4)  provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.

CHAPTER 552.  ARTIFICIAL INTELLIGENCE PROTECTION

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 552.001.  DEFINITIONS.  In this chapter:

(1)  “Deployer” means a person who deploys an artificial intelligence system for use in this state.

(2)  “Developer” means a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state.

(3)  “Governmental entity” means any department, commission, board, office, authority, or other administrative unit of this state or of any political subdivision of this state, that exercises governmental functions under the authority of the laws of this state.  The term does not include:

(A)  a hospital district created under the Health and Safety Code or Article IX, Texas Constitution; or

(B)  an institution of higher education, as defined by Section 61.003, Education Code, including any university system or any component institution of the system.

Sec. 552.002.  CONSTRUCTION OF CHAPTER.  This chapter may not be construed to:

(1)  impose a requirement on a person that adversely affects the rights or freedoms of any person, including the right of free speech; or

(2)  authorize any department or agency other than the Department of Insurance to regulate or oversee the business of insurance.

Sec. 552.003.  LOCAL PREEMPTION.  This chapter supersedes and preempts any ordinance, resolution, rule, or other regulation adopted by a political subdivision regarding the use of artificial intelligence systems.

SUBCHAPTER B. DUTIES AND PROHIBITIONS ON USE OF ARTIFICIAL INTELLIGENCE

Sec. 552.051.  DISCLOSURE TO CONSUMERS.  (a)  In this section, “health care services” means services related to human health or to the diagnosis, prevention, or treatment of a human disease or impairment provided by an individual licensed, registered, or certified under applicable state or federal law to provide those services.

(b)  A governmental agency that makes available an artificial intelligence system intended to interact with consumers shall disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system.

(c)  A person is required to make the disclosure under Subsection (b) regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.

(d)  A disclosure under Subsection (b):

(1)  must be clear and conspicuous;

(2)  must be written in plain language; and

(3)  may not use a dark pattern, as that term is defined by Section 541.001.

(e)  A disclosure under Subsection (b) may be provided by using a hyperlink to direct a consumer to a separate Internet web page.

(f)  If an artificial intelligence system is used in relation to health care service or treatment, the provider of the service or treatment shall provide the disclosure under Subsection (b) to the recipient of the service or treatment or the recipient’s personal representative not later than the date the service or treatment is first provided, except in the case of emergency, in which case the provider shall provide the required disclosure as soon as reasonably possible.

Sec. 552.052.  MANIPULATION OF HUMAN BEHAVIOR.  A person may not develop or deploy an artificial intelligence system in a manner that intentionally aims to incite or encourage a person to:

(1)  commit physical self-harm, including suicide;

(2)  harm another person; or

(3)  engage in criminal activity.

Sec. 552.053.  SOCIAL SCORING.  A governmental entity may not use or deploy an artificial intelligence system that evaluates or classifies a natural person or group of natural persons based on social behavior or personal characteristics, whether known, inferred, or predicted, with the intent to calculate or assign a social score or similar categorical estimation or valuation of the person or group of persons that results or may result in:

(1)  detrimental or unfavorable treatment of a person or group of persons in a social context unrelated to the context in which the behavior or characteristics were observed or noted;

(2)  detrimental or unfavorable treatment of a person or group of persons that is unjustified or disproportionate to the nature or gravity of the observed or noted behavior or characteristics; or

(3)  the infringement of any right guaranteed under the United States Constitution, the Texas Constitution, or state or federal law.

Sec. 552.054.  CAPTURE OF BIOMETRIC DATA.  (a)  In this section, “biometric data” means data generated by automatic measurements of an individual’s biological characteristics.  The term includes a fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual.  The term does not include a physical or digital photograph or data generated from a physical or digital photograph, a video or audio recording or data generated from a video or audio recording, or information collected, used, or stored for health care treatment, payment, or operations under the Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. Section 1320d et seq.).

(b)  A governmental entity may not develop or deploy an artificial intelligence system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source without the individual’s consent, if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or state or federal law.

(c)  A violation of Section 503.001 is a violation of this section.

Sec. 552.055.  CONSTITUTIONAL PROTECTION.  (a)  A person may not develop or deploy an artificial intelligence system with the sole intent for the artificial intelligence system to infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution.

(b)  This section is remedial in purpose and may not be construed to create or expand any right guaranteed by the United States Constitution.

Sec. 552.056.  UNLAWFUL DISCRIMINATION.  (a)  In this section:

(1)  “Financial institution” has the meaning assigned by Section 201.101, Finance Code.

(2)  “Insurance entity” means:

(A)  an entity described by Section 82.002(a), Insurance Code;

(B)  a fraternal benefit society regulated under Chapter 885, Insurance Code; or

(C)  the developer of an artificial intelligence system used by an entity described by Paragraph (A) or (B).

(3)  “Protected class” means a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.

(b)  A person may not develop or deploy an artificial intelligence system with the intent to unlawfully discriminate against a protected class in violation of state or federal law.

(c)  For purposes of this section, a disparate impact is not sufficient by itself to demonstrate an intent to discriminate.

(d)  This section does not apply to an insurance entity for purposes of providing insurance services if the entity is subject to applicable statutes regulating unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the business of insurance.

(e)  A federally insured financial institution is considered to be in compliance with this section if the institution complies with all federal and state banking laws and regulations.

Sec. 552.057.  CERTAIN SEXUALLY EXPLICIT CONTENT AND CHILD PORNOGRAPHY.  A person may not:

(1)  develop or distribute an artificial intelligence system with the sole intent of producing, assisting or aiding in producing, or distributing:

(A)  visual material in violation of Section 43.26, Penal Code; or

(B)  deep fake videos or images in violation of Section 21.165, Penal Code; or

(2)  intentionally develop or distribute an artificial intelligence system that engages in text-based conversations that simulate or describe sexual conduct, as that term is defined by Section 43.25, Penal Code, while impersonating or imitating a child younger than 18 years of age.

SUBCHAPTER C.  ENFORCEMENT

Sec. 552.101.  ENFORCEMENT AUTHORITY.  (a)  The attorney general has exclusive authority to enforce this chapter, except to the extent provided by Section 552.106.

(b)  This chapter does not provide a basis for, and is not subject to, a private right of action for a violation of this chapter or any other law.

Sec. 552.102.  INFORMATION AND COMPLAINTS.  The attorney general shall create and maintain an online mechanism on the attorney general’s Internet website through which a consumer may submit a complaint under this chapter to the attorney general.

Sec. 552.103.  INVESTIGATIVE AUTHORITY.  (a)  If the attorney general receives a complaint through the online mechanism under Section 552.102 alleging a violation of this chapter, the attorney general may issue a civil investigative demand to determine if a violation has occurred.  The attorney general shall issue demands in accordance with and under the procedures established under Section 15.10.

(b)  The attorney general may request from the person reported through the online mechanism, pursuant to a civil investigative demand issued under Subsection (a):

(1)  a high-level description of the purpose, intended use, deployment context, and associated benefits of the artificial intelligence system with which the person is affiliated;

(2)  a description of the type of data used to program or train the artificial intelligence system;

(3)  a high-level description of the categories of data processed as inputs for the artificial intelligence system;

(4)  a high-level description of the outputs produced by the artificial intelligence system;

(5)  any metrics the person uses to evaluate the performance of the artificial intelligence system;

(6)  any known limitations of the artificial intelligence system;

(7)  a high-level description of the post-deployment monitoring and user safeguards the person uses for the artificial intelligence system, including, if the person is a deployer, the oversight, use, and learning process established by the person to address issues arising from the system’s deployment; or

(8)  any other relevant documentation reasonably necessary for the attorney general to conduct an investigation under this section.

Sec. 552.104.  NOTICE OF VIOLATION; OPPORTUNITY TO CURE.  (a)  If the attorney general determines that a person has violated or is violating this chapter, the attorney general shall notify the person in writing of the determination, identifying the specific provisions of this chapter the attorney general alleges have been or are being violated.

(b)  The attorney general may not bring an action against the person:

(1)  before the 60th day after the date the attorney general provides the notice under Subsection (a); or

(2)  if, before the 60th day after the date the attorney general provides the notice under Subsection (a), the person:

(A)  cures the identified violation; and

(B)  provides the attorney general with a written statement that the person has:

(i)  cured the alleged violation;

(ii)  provided supporting documentation to show the manner in which the person cured the violation; and

(iii)  made any necessary changes to internal policies to reasonably prevent further violation of this chapter.

Sec. 552.105.  CIVIL PENALTY; INJUNCTION.  (a)  A person who violates this chapter and does not cure the violation under Section 552.104 is liable to this state for a civil penalty in an amount of:

(1)  for each violation the court determines to be curable or a breach of a statement submitted to the attorney general under Section 552.104(b)(2), not less than $10,000 and not more than $12,000;

(2)  for each violation the court determines to be uncurable, not less than $80,000 and not more than $200,000; and

(3)  for a continued violation, not less than $2,000 and not more than $40,000 for each day the violation continues.

(b)  The attorney general may bring an action in the name of this state to:

(1)  collect a civil penalty under this section;

(2)  seek injunctive relief against further violation of this chapter; and

(3)  recover attorney’s fees and reasonable court costs or other investigative expenses.

(c)  There is a rebuttable presumption that a person used reasonable care as required under this chapter.

(d)  A defendant in an action under this section may seek an expedited hearing or other process, including a request for declaratory judgment, if the person believes in good faith that the person has not violated this chapter.

(e)  A defendant in an action under this section may not be found liable if:

(1)  another person uses the artificial intelligence system affiliated with the defendant in a manner prohibited by this chapter; or

(2)  the defendant discovers a violation of this chapter through:

(A)  feedback from a developer, deployer, or other person who believes a violation has occurred;

(B)  testing, including adversarial testing or red-team testing;

(C)  following guidelines set by applicable state agencies; or

(D)  if the defendant substantially complies with the most recent version of the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” published by the National Institute of Standards and Technology or another nationally or internationally recognized risk management framework for artificial intelligence systems, an internal review process.

(f)  The attorney general may not bring an action to collect a civil penalty under this section against a person for an artificial intelligence system that has not been deployed.

Sec. 552.106.  ENFORCEMENT ACTIONS BY STATE AGENCIES.  (a)  A state agency may impose sanctions against a person licensed, registered, or certified by that agency for a violation of Subchapter B if:

(1)  the person has been found in violation of this chapter under Section 552.105; and

(2)  the attorney general has recommended additional enforcement by the applicable agency.

(b)  Sanctions under this section may include:

(1)  suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and

(2)  a monetary penalty not to exceed $100,000.

CHAPTER 553.  ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 553.001.  DEFINITIONS.  In this chapter:

(1)  “Applicable agency” means a department of this state established by law to regulate certain types of business activity in this state and the people engaging in that business, including the issuance of licenses and registrations, that the department determines would regulate a program participant if the person were not operating under this chapter.

(2)  “Department” means the Texas Department of Information Resources.

(3)  “Program” means the regulatory sandbox program established under this chapter that allows a person, without being licensed or registered under the laws of this state, to test an artificial intelligence system for a limited time and on a limited basis.

(4)  “Program participant” means a person whose application to participate in the program is approved and who may test an artificial intelligence system under this chapter.

SUBCHAPTER B.  SANDBOX PROGRAM FRAMEWORK

Sec. 553.051.  ESTABLISHMENT OF SANDBOX PROGRAM.  (a)  The department, in consultation with the council, shall create a regulatory sandbox program that enables a person to obtain legal protection and limited access to the market in this state to test innovative artificial intelligence systems without obtaining a license, registration, or other regulatory authorization.

(b)  The program is designed to:

(1)  promote the safe and innovative use of artificial intelligence systems across various sectors including healthcare, finance, education, and public services;

(2)  encourage responsible deployment of artificial intelligence systems while balancing the need for consumer protection, privacy, and public safety;

(3)  provide clear guidelines for a person who develops an artificial intelligence system to test systems while certain laws and regulations related to the testing are waived or suspended; and

(4)  allow a person to engage in research, training, testing, or other pre-deployment activities to develop an artificial intelligence system.

(c)  The attorney general may not file or pursue charges against a program participant for violation of a law or regulation waived under this chapter that occurs during the testing period.

(d)  A state agency may not file or pursue punitive action against a program participant, including the imposition of a fine or the suspension or revocation of a license, registration, or other authorization, for violation of a law or regulation waived under this chapter that occurs during the testing period.

(e)  Notwithstanding Subsections (c) and (d), the requirements of Subchapter B, Chapter 552, may not be waived, and the attorney general or a state agency may file or pursue charges or action against a program participant who violates that subchapter.

Sec. 553.052.  APPLICATION FOR PROGRAM PARTICIPATION.  (a)  A person must obtain approval from the department and any applicable agency before testing an artificial intelligence system under the program.

(b)  The department by rule shall prescribe the application form.  The form must require the applicant to:

(1)  provide a detailed description of the artificial intelligence system the applicant desires to test in the program, and its intended use;

(2)  include a benefit assessment that addresses potential impacts on consumers, privacy, and public safety;

(3)  describe the applicant’s plan for mitigating any adverse consequences that may occur during the test; and

(4)  provide proof of compliance with any applicable federal artificial intelligence laws and regulations.

Sec. 553.053.  DURATION AND SCOPE OF PARTICIPATION.  (a)  A program participant approved by the department and each applicable agency may test and deploy an artificial intelligence system under the program for a period of not more than 36 months.

(b)  The department may extend a test under this chapter if the department finds good cause for the test to continue.

Sec. 553.054.  EFFICIENT USE OF RESOURCES.  The department shall coordinate the activities under this subchapter and any other law relating to artificial intelligence systems to ensure efficient system implementation and to streamline the use of department resources, including information sharing and personnel.

SUBCHAPTER C.  OVERSIGHT AND COMPLIANCE

Sec. 553.101.  COORDINATION WITH APPLICABLE AGENCY.  (a)  The department shall coordinate with all applicable agencies to oversee the operation of a program participant.

(b)  The council or an applicable agency may recommend to the department that a program participant be removed from the program if the council or applicable agency finds that the program participant’s artificial intelligence system:

(1)  poses an undue risk to public safety or welfare;

(2)  violates any federal law or regulation; or

(3)  violates any state law or regulation not waived under the program.

Sec. 553.102.  PERIODIC REPORT BY PROGRAM PARTICIPANT.  (a)  A program participant shall provide a quarterly report to the department.

(b)  The report shall include:

(1)  metrics for the artificial intelligence system’s performance;

(2)  updates on how the artificial intelligence system mitigates any risks associated with its operation; and

(3)  feedback from consumers and affected stakeholders that are using an artificial intelligence system tested under this chapter.

(c)  The department shall maintain confidentiality regarding the intellectual property, trade secrets, and other sensitive information it obtains through the program.

Sec. 553.103.  ANNUAL REPORT BY DEPARTMENT.  (a)  The department shall submit an annual report to the legislature.

(b)  The report shall include:

(1)  the number of program participants testing an artificial intelligence system in the program;

(2)  the overall performance and impact of artificial intelligence systems tested in the program; and

(3)  recommendations on changes to laws or regulations for future legislative consideration.

CHAPTER 554.  TEXAS ARTIFICIAL INTELLIGENCE COUNCIL

SUBCHAPTER A.  CREATION AND ORGANIZATION OF COUNCIL

Sec. 554.001.  CREATION OF COUNCIL.  (a)  The Texas Artificial Intelligence Council is created to:

(1)  ensure artificial intelligence systems in this state are ethical and developed in the public’s best interest;

(2)  ensure artificial intelligence systems in this state do not harm public safety or undermine individual freedoms by finding issues and making recommendations to the legislature regarding the Penal Code and Chapter 82, Civil Practice and Remedies Code;

(3)  identify existing laws and regulations that impede innovation in the development of artificial intelligence systems and recommend appropriate reforms;

(4)  analyze opportunities to improve the efficiency and effectiveness of state government operations through the use of artificial intelligence systems;

(5)  make recommendations to applicable state agencies regarding the use of artificial intelligence systems to improve the agencies’ efficiency and effectiveness;

(6)  evaluate potential instances of regulatory capture, including undue influence by technology companies or disproportionate burdens on smaller innovators caused by the use of artificial intelligence systems;

(7)  evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users through the use of artificial intelligence systems;

(8)  offer guidance and recommendations to the legislature on the ethical and legal use of artificial intelligence systems;

(9)  conduct and publish the results of a study on the current regulatory environment for artificial intelligence systems;

(10)  receive reports from the Department of Information Resources regarding the regulatory sandbox program under Chapter 553; and

(11)  make recommendations for improvements to the regulatory sandbox program under Chapter 553.

(b)  The council is administratively attached to the Department of Information Resources, and the department shall provide administrative support to the council as provided by this section.

(c)  The Department of Information Resources and the council shall enter into a memorandum of understanding detailing:

(1)  the administrative support the council requires from the department to fulfill the council’s purposes;

(2)  the reimbursement of administrative expenses to the department; and

(3)  any other provisions necessary to ensure the efficient operation of the council.

Sec. 554.002.  COUNCIL MEMBERSHIP.  (a)  The council is composed of seven members as follows:

(1)  three members of the public appointed by the governor;

(2)  two members of the public appointed by the lieutenant governor; and

(3)  two members of the public appointed by the speaker of the house of representatives.

(b)  Members of the council serve staggered four-year terms, with the terms of three or four members expiring every two years.

(c)  The governor shall appoint a chair from among the members, and the council shall elect a vice chair from its membership.

(d)  The council may establish an advisory board composed of individuals from the public who possess expertise directly related to the council’s functions, including technical, ethical, regulatory, and other relevant areas.

Sec. 554.003.  QUALIFICATIONS.  Members of the council must be Texas residents and have knowledge or expertise in one or more of the following areas:

(1)  artificial intelligence systems;

(2)  data privacy and security;

(3)  ethics in technology or law;

(4)  public policy and regulation;

(5)  risk management related to artificial intelligence systems;

(6)  improving the efficiency and effectiveness of governmental operations; or

(7)  anticompetitive practices and market fairness.

Sec. 554.004.  STAFF AND ADMINISTRATION.  The council may hire an executive director and other personnel as necessary to perform its duties.

SUBCHAPTER B.  POWERS AND DUTIES OF COUNCIL

Sec. 554.101.  ISSUANCE OF REPORTS.  (a)  The council may issue reports to the legislature regarding the use of artificial intelligence systems in this state.

(b)  The council may issue reports on:

(1)  the compliance of artificial intelligence systems in this state with the laws of this state;

(2)  the ethical implications of deploying artificial intelligence systems in this state;

(3)  data privacy and security concerns related to artificial intelligence systems in this state; or

(4)  potential liability or legal risks associated with the use of artificial intelligence systems in this state.

Sec. 554.102.  TRAINING AND EDUCATIONAL OUTREACH.  The council shall conduct training programs for state agencies and local governments on the use of artificial intelligence systems.

Sec. 554.103.  LIMITATION OF AUTHORITY.  The council may not:

(1)  adopt rules or promulgate guidance that is binding for any entity;

(2)  interfere with or override the operation of a state agency; or

(3)  perform a duty or exercise a power not granted by this chapter.

SECTION 5.  Section 325.011, Government Code, is amended to read as follows:

Sec. 325.011.  CRITERIA FOR REVIEW.  The commission and its staff shall consider the following criteria in determining whether a public need exists for the continuation of a state agency or its advisory committees or for the performance of the functions of the agency or its advisory committees:

(1)  the efficiency and effectiveness with which the agency or the advisory committee operates;

(2)(A)  an identification of the mission, goals, and objectives intended for the agency or advisory committee and of the problem or need that the agency or advisory committee was intended to address; and

(B)  the extent to which the mission, goals, and objectives have been achieved and the problem or need has been addressed;

(3)(A)  an identification of any activities of the agency in addition to those granted by statute and of the authority for those activities; and

(B)  the extent to which those activities are needed;

(4)  an assessment of authority of the agency relating to fees, inspections, enforcement, and penalties;

(5)  whether less restrictive or alternative methods of performing any function that the agency performs could adequately protect or provide service to the public;

(6)  the extent to which the jurisdiction of the agency and the programs administered by the agency overlap or duplicate those of other agencies, the extent to which the agency coordinates with those agencies, and the extent to which the programs administered by the agency can be consolidated with the programs of other state agencies;

(7)  the promptness and effectiveness with which the agency addresses complaints concerning entities or other persons affected by the agency, including an assessment of the agency’s administrative hearings process;

(8)  an assessment of the agency’s rulemaking process and the extent to which the agency has encouraged participation by the public in making its rules and decisions and the extent to which the public participation has resulted in rules that benefit the public;

(9)  the extent to which the agency has complied with:

(A)  federal and state laws and applicable rules regarding equality of employment opportunity and the rights and privacy of individuals; and

(B)  state law and applicable rules of any state agency regarding purchasing guidelines and programs for historically underutilized businesses;

(10)  the extent to which the agency issues and enforces rules relating to potential conflicts of interest of its employees;

(11)  the extent to which the agency complies with Chapters 551 and 552 and follows records management practices that enable the agency to respond efficiently to requests for public information;

(12)  the effect of federal intervention or loss of federal funds if the agency is abolished;

(13)  the extent to which the purpose and effectiveness of reporting requirements imposed on the agency justifies the continuation of the requirement; [and]

(14)  an assessment of the agency’s cybersecurity practices using confidential information available from the Department of Information Resources or any other appropriate state agency; and

(15)  an assessment of the agency’s use of artificial intelligence systems, as that term is defined by Section 551.001, Business & Commerce Code, in its operations and its oversight of the use of artificial intelligence systems by persons under the agency’s jurisdiction, and any related impact on the agency’s ability to achieve its mission, goals, and objectives, made using information available from the Department of Information Resources, the attorney general, or any other appropriate state agency.

SECTION 6.  Section 2054.068(b), Government Code, is amended to read as follows:

(b)  The department shall collect from each state agency information on the status and condition of the agency’s information technology infrastructure, including information regarding:

(1)  the agency’s information security program;

(2)  an inventory of the agency’s servers, mainframes, cloud services, and other information technology equipment;

(3)  identification of vendors that operate and manage the agency’s information technology infrastructure; [and]

(4)  any additional related information requested by the department; and

(5)  an evaluation of the use or considered use of artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, by each state agency.

SECTION 7.  Section 2054.0965(b), Government Code, is amended to read as follows:

(b)  Except as otherwise modified by rules adopted by the department, the review must include:

(1)  an inventory of the agency’s major information systems, as defined by Section 2054.008, and other operational or logistical components related to deployment of information resources as prescribed by the department;

(2)  an inventory of the agency’s major databases, artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, and applications;

(3)  a description of the agency’s existing and planned telecommunications network configuration;

(4)  an analysis of how information systems, components, databases, applications, and other information resources have been deployed by the agency in support of:

(A)  applicable achievement goals established under Section 2056.006 and the state strategic plan adopted under Section 2056.009;

(B)  the state strategic plan for information resources; and

(C)  the agency’s business objectives, mission, and goals;

(5)  agency information necessary to support the state goals for interoperability and reuse; and

(6)  confirmation by the agency of compliance with state statutes, rules, and standards relating to information resources.

SECTION 8.  Not later than September 1, 2026, the attorney general shall post on the attorney general’s Internet website the information and online mechanism required by Section 552.102, Business & Commerce Code, as added by this Act.

SECTION 9.  (a)  Notwithstanding any other section of this Act, in a state fiscal year, a state agency to which this Act applies is not required to implement a provision found in another section of this Act that is drafted as a mandatory provision imposing a duty on the agency to take an action unless money is specifically appropriated to the agency for that fiscal year to carry out that duty.  The agency may implement the provision in that fiscal year to the extent other funding is available to the agency to do so.

(b)  If, as authorized by Subsection (a) of this section, the state agency does not implement the mandatory provision in a state fiscal year, the state agency, in its legislative budget request for the next state fiscal biennium, shall certify that fact to the Legislative Budget Board and include a written estimate of the costs of implementing the provision in each year of that next state fiscal biennium.

SECTION 10.  This Act takes effect January 1, 2026.

    President of the Senate           Speaker of the House      

I certify that H.B. No. 149 was passed by the House on April 23, 2025, by the following vote:  Yeas 146, Nays 3, 1 present, not voting; and that the House concurred in Senate amendments to H.B. No. 149 on May 30, 2025, by the following vote:  Yeas 121, Nays 17, 2 present, not voting.

______________________________

Chief Clerk of the House   

I certify that H.B. No. 149 was passed by the Senate, with amendments, on May 23, 2025, by the following vote:  Yeas 31, Nays 0.

______________________________

Secretary of the Senate   

APPROVED: __________________

                 Date       

          __________________

               Governor       


Appendix 2 — Model Ordinance: Responsible Use of Artificial Intelligence in City Operations

ORDINANCE NO. ______

AN ORDINANCE

relating to the responsible use of artificial intelligence systems by the City; establishing transparency, accountability, and oversight requirements; and providing for implementation and administration.

WHEREAS, the City recognizes that artificial intelligence (“AI”) systems are increasingly used to improve operational efficiency, service delivery, data analysis, and internal workflows; and

WHEREAS, the City further recognizes that certain uses of AI may influence decisions affecting residents, employees, vendors, or regulated parties and therefore require appropriate oversight; and

WHEREAS, the City seeks to encourage responsible innovation while preserving public trust, transparency, and accountability; and

WHEREAS, the Texas Legislature has enacted the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026, establishing statewide standards for AI use by government entities; and

WHEREAS, the City recognizes that the adoption of artificial intelligence tools may, over time, change how work is performed and how staffing needs are structured, and that any such impacts are expected to occur gradually through attrition, reassignment, or role redesign rather than immediate workforce reductions;

NOW, THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF __________, TEXAS:

Section 1. Definitions

For purposes of this Ordinance:

  1. “Artificial Intelligence System” means a computational system that uses machine learning, statistical modeling, or related techniques to perform tasks normally associated with human intelligence, including analysis, prediction, classification, content generation, or prioritization.
  2. “Decision-Adjacent AI” means an AI system that materially influences, prioritizes, or recommends outcomes related to enforcement, eligibility, allocation of resources, personnel actions, procurement decisions, or public services, even if final decisions are made by a human.
  3. “High-Risk AI Use” means deployment of an AI system that directly or indirectly affects individual rights, access to services, enforcement actions, or legally protected interests.
  4. “Department” means any City department, office, division, or agency.

Section 2. Permitted Use of Artificial Intelligence

(a) Internal Productivity Uses. Departments may deploy AI systems for internal productivity and analytical purposes, including but not limited to:

  • Drafting and summarization of documents
  • Data analysis and forecasting
  • Workflow automation
  • Research and internal reporting
  • Customer-service chat tools providing general information (with disclaimers as appropriate)

Such uses shall not require prior Council approval but shall be subject to internal documentation requirements.

(b) Decision-Adjacent Uses. AI systems that influence or support decisions affecting residents, employees, vendors, or regulated entities may be deployed only in accordance with Sections 3 and 4 of this Ordinance.

Section 3. Prohibited Uses

No Department shall deploy or use an AI system that:

  1. Performs social scoring of individuals or groups based on behavior, personal traits, or reputation for the purpose of denying services, benefits, or rights;
  2. Intentionally discriminates against a protected class in violation of state or federal law;
  3. Generates or deploys biometric identification or surveillance in violation of constitutional protections;
  4. Produces or facilitates unlawful deep-fake or deceptive content;
  5. Operates as a fully automated decision-making system without meaningful human review in matters affecting legal rights or obligations.

Section 4. Oversight and Approval for High-Risk AI Uses

(a) Inventory Requirement. The City Manager shall maintain a centralized AI Systems Inventory identifying:

  • Each AI system in use
  • The Department deploying the system
  • The system’s purpose
  • Whether the use is classified as high-risk

(b) Approval Process. Prior to deployment of any High-Risk AI Use, the Department must:

  1. Submit a written justification describing the system’s purpose and scope;
  2. Identify the data sources used by the system;
  3. Describe human oversight mechanisms;
  4. Obtain approval from:
    • The City Manager (or designee), and
    • The City Attorney for legal compliance review.

(c) Human Accountability. Each AI system shall have a designated human owner responsible for:

  • Monitoring performance
  • Responding to errors or complaints
  • Suspending use if risks are identified

Section 5. Transparency and Public Disclosure

(a) Disclosure to the Public. When a City AI system interacts directly with residents, the City shall provide clear notice that the interaction involves AI.

(b) Public Reporting. The City shall publish annually:

  • A summary of AI systems in use
  • The general purposes of high-risk AI systems
  • Contact information for public inquiries

No proprietary or security-sensitive information shall be disclosed.

Section 6. Procurement and Vendor Requirements

All City contracts involving AI systems shall, where applicable:

  1. Require disclosure of AI functions;
  2. Prohibit undisclosed algorithmic decision-making;
  3. Allow the City to audit or review AI system outputs relevant to City operations;
  4. Require vendors to notify the City of material changes to AI functionality.

Section 7. Review and Sunset

(a) Periodic Review. High-risk AI systems shall be reviewed at least annually to assess:

  • Accuracy
  • Bias
  • Continued necessity
  • Compliance with this Ordinance

(b) Sunset Authority. The City Manager may suspend or terminate use of any AI system that poses unacceptable risk or fails compliance review.

Section 8. Training

The City shall provide appropriate training to employees involved in:

  • Deploying AI systems
  • Supervising AI-assisted workflows
  • Interpreting AI-generated outputs

Section 9. Severability

If any provision of this Ordinance is held invalid, such invalidity shall not affect the remaining provisions.

Section 10. Effective Date

This Ordinance shall take effect immediately upon adoption.


Appendix 3 — City Manager Administrative Regulation: Responsible Use of Artificial Intelligence

ADMINISTRATIVE REGULATION NO. ___

Subject: Responsible Use of Artificial Intelligence (AI) in City Operations
Authority: Ordinance No. ___ (Responsible Use of Artificial Intelligence)
Issued by: City Manager
Effective Date: __________

1. Purpose

This Administrative Regulation establishes operational procedures for the responsible deployment, oversight, and monitoring of artificial intelligence (AI) systems used by the City, consistent with adopted Council policy and applicable state law.

The intent is to:

  • Enable rapid adoption of AI for productivity and service delivery;
  • Ensure transparency and accountability for higher-risk uses; and
  • Protect the City, employees, and residents from unintended consequences.

2. Scope

This regulation applies to all City departments, offices, and divisions that:

  • Develop, procure, deploy, or use AI systems; or
  • Rely on vendor-provided software that includes AI functionality.

3. AI System Classification

Departments shall classify AI systems into one of the following categories:

A. Tier 1 — Internal Productivity AI

Examples:

  • Document drafting and summarization
  • Data analysis and forecasting
  • Internal research and reporting
  • Workflow automation

Oversight Level:

  • Department-level approval
  • Registration in AI Inventory

B. Tier 2 — Decision-Adjacent AI

Examples:

  • Permit or inspection prioritization
  • Vendor or application risk scoring
  • Resource allocation recommendations
  • Enforcement or compliance triage

Oversight Level:

  • City Manager approval
  • Legal review
  • Annual performance review

C. Tier 3 — High-Risk AI

Examples:

  • AI influencing enforcement actions
  • Eligibility determinations
  • Public safety analytics
  • Biometric or surveillance tools

Oversight Level:

  • City Manager approval
  • City Attorney review
  • Documented human-in-the-loop controls
  • Annual audit and Council notification
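For implementing staff, the three-tier scheme above can be reduced to a simple decision rule: systems affecting individual rights or enforcement are Tier 3, systems that merely influence decisions are Tier 2, and everything else is Tier 1. The sketch below is illustrative only; the function name, parameters, and tier labels are assumptions for this example, not part of the regulation.

```python
from enum import Enum

class Tier(Enum):
    INTERNAL_PRODUCTIVITY = 1   # Tier 1: drafting, analysis, reporting, automation
    DECISION_ADJACENT = 2       # Tier 2: prioritization, scoring, triage
    HIGH_RISK = 3               # Tier 3: enforcement, eligibility, biometrics

def classify(affects_rights: bool, influences_decisions: bool) -> Tier:
    """Assign a tier using a hypothetical reading of the classification rules."""
    if affects_rights:
        return Tier.HIGH_RISK           # rights/enforcement impact dominates
    if influences_decisions:
        return Tier.DECISION_ADJACENT   # advisory influence on decisions
    return Tier.INTERNAL_PRODUCTIVITY   # purely internal productivity use

# A document-summarization tool touches neither rights nor decisions:
print(classify(affects_rights=False, influences_decisions=False))
# → Tier.INTERNAL_PRODUCTIVITY
```

In practice the two boolean inputs would come from the department's written justification under Section 5, with the City Attorney resolving borderline cases.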

4. AI Systems Inventory

The City Manager’s Office shall maintain a centralized AI Systems Inventory, which includes:

  • System name and vendor
  • Department owner
  • Purpose and classification tier
  • Date of deployment
  • Oversight requirements

Departments shall update the inventory prior to deploying any new AI system.
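The inventory fields listed above amount to a small record schema. A minimal sketch of one inventory row follows; the field names, the sample system, and the vendor are hypothetical illustrations, not prescribed by this regulation.

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryRecord:
    """One row of the centralized AI Systems Inventory (field names assumed)."""
    system_name: str
    vendor: str
    department_owner: str
    purpose: str
    tier: int                       # classification tier: 1, 2, or 3
    deployed_on: date
    oversight: list[str] = field(default_factory=list)

# Registering a hypothetical Tier 2 system before deployment:
record = InventoryRecord(
    system_name="Permit Triage Assistant",
    vendor="ExampleVendor (hypothetical)",
    department_owner="Development Services",
    purpose="Permit and inspection prioritization",
    tier=2,
    deployed_on=date(2026, 3, 1),
    oversight=["City Manager approval", "Legal review", "Annual performance review"],
)
print(record.tier)  # → 2
```

Keeping the record structured this way makes the annual Council summary under Section 10 a straightforward export rather than a manual compilation.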

5. Approval Process

A. Tier 1 Systems

  • Approved by Department Director
  • Registered in inventory

B. Tier 2 and Tier 3 Systems

Departments must submit:

  1. A description of the system and intended use
  2. Data sources and inputs
  3. Description of human oversight
  4. Risk mitigation measures

Approval required from:

  • City Manager (or designee)
  • City Attorney (for legal compliance)

6. Human Oversight & Accountability

Each AI system shall have a designated System Owner responsible for:

  • Monitoring system outputs
  • Responding to errors or complaints
  • Suspending use if risks emerge
  • Coordinating audits or reviews

No AI system may operate as a fully autonomous decision-maker for actions affecting legal rights or obligations.

7. Vendor & Procurement Controls

Procurement involving AI systems shall:

  • Identify AI functionality explicitly in solicitations
  • Require vendors to disclose material AI updates
  • Prohibit undisclosed algorithmic decision-making
  • Preserve City audit and review rights

8. Monitoring, Review & Sunset

  • Tier 2 and Tier 3 systems shall undergo annual review.
  • Systems may be suspended or sunset if:
    • Accuracy degrades
    • Bias is identified
    • Legal risk increases
    • The system no longer serves a defined purpose

9. Training

Departments deploying AI shall ensure appropriate staff training covering:

  • Proper interpretation of AI outputs
  • Limitations of AI systems
  • Escalation and error-handling procedures

10. Reporting to Council

The City Manager shall provide Council with:

  • An annual summary of AI systems in use
  • Identification of Tier 3 (High-Risk) systems
  • Any material incidents or corrective actions

11. Effective Date

This Administrative Regulation is effective immediately upon issuance.

12. Workforce Considerations

The use of artificial intelligence systems may change job functions and workflows over time. Departments shall:

  • Use AI to augment employee capabilities wherever possible;
  • Prioritize retraining, reassignment, and natural attrition when workflows change;
  • Coordinate with Human Resources before deploying AI systems that materially alter job duties; and
  • Recognize that long-term staffing impacts, if any, remain subject to City Manager and City Council authority.

Appendix 4 — Public-Facing FAQ: Responsible Use of Artificial Intelligence in City Operations

What is this ordinance about?

This ordinance establishes clear rules for how the City may use artificial intelligence (AI) tools. It allows the City to use modern technology to improve efficiency and service delivery while ensuring that higher-risk uses are transparent, accountable, and overseen by people.

Is the City already using artificial intelligence?

Yes. Like most modern organizations, the City already uses limited AI-enabled tools for tasks such as document drafting, data analysis, customer service support, and vendor-provided software systems.

This ordinance ensures those tools are used consistently and responsibly.

Is this ordinance banning artificial intelligence?

No.
The ordinance does not ban AI. It encourages responsible adoption of AI for productivity and internal efficiency while placing guardrails on uses that could affect people’s rights or access to services.

Why is the City adopting rules now?

AI tools are becoming more common and more capable. Clear rules help ensure:

  • Transparency in how AI is used
  • Accountability for outcomes
  • Compliance with new Texas law
  • Public trust in City operations

The Texas Legislature recently enacted statewide standards for AI use by government entities, and this ordinance aligns the City with those expectations.

Will artificial intelligence affect City jobs?

AI may change how work is done over time, just as previous technologies have.

This ordinance does not authorize immediate workforce reductions. Any long-term impacts are expected to occur gradually and, where possible, through:

  • Natural attrition
  • Reassignment
  • Retraining
  • Changes in job duties

Final staffing decisions remain with City leadership and City Council.

Will AI replace City employees?

AI tools are intended to assist employees, not replace human judgment. For higher-risk uses, the ordinance requires meaningful human oversight and accountability.

Can AI make decisions about me automatically?

No.
The ordinance prohibits fully automated decision-making that affects legal rights, enforcement actions, or access to services without human review.

AI may provide information or recommendations, but people remain responsible for decisions.

Will the City use AI for surveillance or facial recognition?

The ordinance prohibits AI uses that violate constitutional protections, including improper biometric surveillance.

Any use of biometric or surveillance-related AI would require strict legal review and compliance with state and federal law.

How will I know if I’m interacting with AI?

If the City uses AI systems that interact directly with residents, the City must clearly disclose that you are interacting with an AI system.

Does this apply to police or public safety?

Yes.
AI tools used in public safety contexts are considered higher-risk and require additional review, approval, and oversight. AI systems may not independently make enforcement decisions.

Who is responsible if an AI system makes a mistake?

Each AI system has a designated City employee responsible for monitoring its use, addressing errors, and suspending the system if necessary.

Responsibility remains with the City, not the software.

Will the public be able to see how AI is used?

Yes.
The City will publish an annual summary describing:

  • The types of AI systems in use
  • Their general purpose
  • How residents can ask questions or raise concerns

Sensitive or proprietary information will not be disclosed.

Does this create a new board or bureaucracy?

No.
Oversight is handled through existing City leadership and administrative structures.

Is there a cost to adopting this ordinance?

There is no direct cost associated with adoption. Over time, responsible AI use may help control costs by improving productivity and efficiency.

How often will this policy be reviewed?

Higher-risk AI systems are reviewed annually. The ordinance itself may be updated as technology and law evolve.

Who can I contact with questions or concerns?

Residents may contact the City Manager’s Office or submit inquiries through the City’s website. Information on AI use and reporting channels will be publicly available.

Bottom Line

This ordinance ensures the City:

  • Uses modern tools responsibly
  • Maintains human accountability
  • Protects public trust
  • Aligns with Texas law
  • Adapts thoughtfully to technological change