Davos and the World Economic Forum: A Plain-Spoken Guide for the Curious

A collaboration between Lewis McLain & AI

Every January, headlines begin to murmur about a small Alpine town in Switzerland where presidents, prime ministers, billionaires, activists, and journalists gather in winter coats and sensible boots. The place is Davos. The occasion is the annual meeting of the World Economic Forum.

For many people, what they hear sounds mysterious, elite, or faintly ominous. For others, it sounds like empty talk in a luxury setting. Most people simply want to know: what is this thing, who’s there, and why does it matter?

This essay is written for that middle ground—the reader who knows little, hears a lot, and wants a clearer picture without conspiracy or cheerleading.


What the World Economic Forum actually is

The World Economic Forum is not a world government. It cannot pass laws, levy taxes, deploy troops, or compel nations or companies to do anything. It is an international nonprofit organization based in Geneva whose central purpose is to convene people who rarely sit in the same room: political leaders, business executives, academics, civil-society leaders, technologists, and journalists.

Its core belief is simple: many of the biggest problems of modern life—financial instability, pandemics, climate change, technological disruption—do not respect borders or sectors. Governments alone cannot solve them. Markets alone cannot solve them. NGOs alone cannot solve them. The Forum exists to provide a neutral place where these worlds collide, talk, argue, and sometimes align.

That makes the Forum a platform, not a power. Its influence comes from who attends and what conversations happen—not from any formal authority.


How Davos became Davos

The Forum began modestly in 1971, founded by German economist Klaus Schwab as the European Management Forum. The early meetings focused on helping European companies learn modern management practices. Davos, a quiet mountain town, was chosen deliberately: remote enough to keep people focused, neutral enough to avoid national dominance.

Over time, as globalization accelerated, business problems became political problems, technological problems became ethical problems, and economic decisions began shaping entire societies. The Forum expanded with the world it was trying to understand.

What started with a few hundred executives grew into a global gathering. Today, the annual meeting typically brings about 2,500–3,000 participants from more than 130 countries, including dozens of heads of state and government, hundreds of CEOs, leaders of international organizations, researchers, activists, and several hundred journalists. It is large—but intentionally capped to remain workable rather than sprawling.


What actually happens there

The popular image of Davos is a series of panel discussions filled with polished talking points. Those panels do exist, and they are public-facing for a reason: they help surface ideas and set agendas.

But the real substance happens elsewhere.

Davos is designed for density of interaction. Leaders move between formal sessions, small working groups, bilateral meetings, and unplanned conversations in hallways and cafés. Many of these meetings are private and off the record—not because secrets are being plotted, but because frank conversation is impossible when every sentence becomes a headline.

No binding decisions are made. No treaties are signed. What does happen is relationship-building, early alignment, and problem-definition. In global affairs, those are often the invisible first steps before any formal action occurs later through governments, markets, or institutions.


What the Forum has actually achieved

It’s fair to say the World Economic Forum has not “solved” the world’s problems. Anyone claiming otherwise should be met with raised eyebrows. Its contributions are subtler.

First, the Forum is exceptionally good at agenda-setting. Ideas such as stakeholder capitalism, ESG reporting, global health coordination, and AI governance gained early prominence at Davos before moving into boardrooms and legislatures.

Second, the Forum has served as an incubator for cooperation. It has helped launch or align initiatives in areas like vaccine access, climate finance, and cybersecurity norms by bringing public and private actors together before formal mechanisms existed.

Third, Davos has functioned at times as an informal diplomatic space. Leaders from rival nations have used it to test ideas, reduce misunderstandings, or reopen channels of communication. These moments rarely make headlines, but they matter precisely because they happen before crises harden into policy.

In short, Davos doesn’t produce outcomes the way elections or treaties do. It produces conditions under which outcomes later become possible.


The criticisms—and why they persist

Criticism of Davos is not irrational. It is, by design, an elite gathering. Many participants arrive by private jet to discuss inequality, climate change, or social strain. The optics are unavoidable, and resentment is understandable.

There is also a persistent frustration that Davos produces more talk than action. That criticism confuses a forum with an executive authority—but it still lands emotionally, because people want visible results.

Finally, there is the concern that some voices—particularly from poorer countries or grassroots movements—struggle to compete with corporate and state power. The Forum has tried to broaden participation, but the imbalance remains a legitimate tension.

These critiques don’t mean Davos is useless. They mean it is limited, and that limitation should be understood rather than ignored.


The bottom line

The World Economic Forum is neither a secret government nor an empty spectacle. It is a tool—an imperfect one—for convening global influence in one place and forcing conversations that rarely happen elsewhere.

Davos matters not because it commands the world, but because it reflects it. The same tensions people feel about globalization, inequality, power, and accountability show up there in concentrated form. That makes it an easy target—and also a useful mirror.

In a fragmented age, the experiment of bringing rivals, allies, critics, and skeptics into the same snowy town continues not because it is ideal, but because no better alternative has yet emerged. Davos doesn’t promise solutions. It offers something rarer and more fragile: the possibility that people with power might listen to one another before deciding what to do next.


Appendix A: Security, Protest, and Public Order at Davos

One of the most common questions people ask—often with suspicion—is: How can so many powerful people gather without turning the place into a fortress?

Security at Davos is led almost entirely by Swiss public authorities, not private forces. Swiss federal and cantonal police, local Davos police, and Swiss Army units operate in support roles such as airspace monitoring, logistics, and rapid response. Visiting leaders bring their own close-protection teams, but overall coordination remains Swiss.

The approach is layered and restrained. Davos is a small, geographically isolated town with limited access routes, which allows authorities to manage entry into the town rather than militarize individual buildings. Accreditation controls, police presence, and venue security form concentric rings, while the overall posture emphasizes predictability and calm rather than intimidation.

Protests are not banned. Switzerland strongly protects the right to assembly. Demonstrations are permitted with advance coordination, designated areas, and agreed routes. Police focus on separation and de-escalation, not suppression. As a result, protests at Davos are usually visible, peaceful, and orderly—more expression than confrontation.

Security at Davos works not because it is overwhelming, but because it is boringly competent.


Appendix B: Who Sets the Agenda?

The Forum’s agenda is not improvised, nor dictated by any single government or corporation.

At the top is a Board of Trustees, responsible for mission, long-term direction, and governance. The board does not choose individual panel topics or speakers, but it defines strategic priorities—the big questions the Forum believes the world must confront in the coming years.

Turning those priorities into an annual theme and program is handled by executive leadership, standing expert networks, and ongoing consultation with governments, international organizations, companies, and research institutions. Themes are often developed years in advance and refined annually as conditions change.

The board sets the compass, the staff draws the map, and participants fill in the terrain.


Appendix C: Where Is the Founder Now?

After leading the organization for more than five decades, Klaus Schwab has stepped back from day-to-day control. He no longer runs operations, sets agendas, or directs programming.

Today, his role is honorary and advisory—that of an institutional elder rather than an executive. Operational leadership rests with a new generation of executives, reflecting the Forum’s attempt to evolve beyond its founder while preserving continuity.


Why the appendices matter

Questions about security, agenda control, and founder influence are often where speculation rushes in to fill silence. Laying out the mechanics doesn’t require defending the Forum—it simply replaces myth with structure.

The World Economic Forum’s influence lies less in who controls it than in who chooses to show up. That remains its defining feature—and its enduring controversy.

Leaving the City Better: Leadership, Limits, and the Question of a Bridge Too Far

A collaboration between Lewis McLain & AI

Leaders inherit messes. They step into offices burdened by deferred maintenance, ignored threats, regulatory capture, and systems quietly bent by special interests. In such a world, passivity does not preserve stability; it preserves neglect. Action becomes the moral baseline, not the exception. The enduring civic question is not whether leaders should push, but how far pushing remains stewardship rather than overreach.

The ancient Greek civic pledge offers a compass: leave the city better than you found it. Public life is stewardship across generations. Authority exists to repair what neglect erodes and to confront what avoidance normalizes. The statesman acts not for comfort, but for continuity—aware that problems ignored do not stay small.

This is where leadership grows hard. Entrenched interests organize precisely because complexity protects them. Manipulation thrives in delay. Incentives reward stasis. Gentle pressure rarely unwinds decades of avoidance. Leaders who push against these forces often look abrasive in real time, not because ego drives them, but because reform disturbs equilibria that were never healthy to begin with.

The phrase “a bridge too far” sharpens this tension. It enters common language through Cornelius Ryan’s account of Operation Market Garden in A Bridge Too Far. The plan is bold and morally urgent—end the war sooner, save lives—but it asks reality to cooperate with optimism. One bridge lies just beyond what logistics, intelligence, and time can support. The failure is not daring; it is miscalculation. The lesson is not “do nothing.” It is “know the load.”

Applied to leadership, the metaphor cuts both ways. Societies stagnate when leaders merely manage decline. Yet institutions exist for reasons that are not always cynical. Some limits preserve legitimacy, trust, and continuity—the invisible infrastructure of a functioning republic. The craft of leadership lies in distinguishing protective limits from self-serving barriers, then pressing the latter without snapping the former.

Seen through this lens, modern leaders often operate in the present tense of pressure. They test boundaries, confront norms, and treat friction as evidence of movement. That posture can be corrective when systems have grown complacent. It can also be hazardous when escalation outruns institutional capacity or public trust. A bridge does not fail the first time it is stressed; it fails after stress becomes routine.

This is where Donald Trump enters the conversation—not as verdict, but as caution. Trump governs with explicit confrontation. He challenges norms openly, personalizes conflict, and compresses long-delayed debates into immediate contests. Supporters see overdue action against captured systems. Critics see erosion of the trust that makes systems work at all. Both readings coexist because the pressure is real and the inheritance is heavy.

The harder question is not whether such pressure is justified—it often is—but whether its sequencing and tone preserve the very institutions meant to be improved. The post-election period after 2020 brings the metaphor into focus. Legal challenges proceed as allowed; courts rule; states certify. Rhetoric, however, accelerates beyond evidence, and persuasion shades toward insistence. The bridge becomes visible. Not crossed decisively, but clearly approached. The risk is not a single act; it is precedent—teaching future leaders that legitimacy can be strained without immediate collapse.

January 6 stands as a symbolic edge of that bridge. Whatever one concludes about intent, the episode reveals an old truth: rhetoric travels faster than control. When foundational processes are publicly contested, leaders cannot always govern how followers translate suspicion into action. The system endures—but at a cost to shared reality.

None of this denies the core point: leaders who inherit deep neglect are not obligated to be passive. Improvement demands pressure. But the Greek ideal pairs strength with sophrosyne—measured restraint guided by wisdom. The city is left better not by humiliating institutions, but by restoring their purpose; not by replacing trust with loyalty to a person, but by renewing confidence in processes that outlast any one leader.

So what does leadership require in a world of manipulation and special interests?

It requires action, because neglect compounds.
It requires push, because stagnation corrodes.
It requires listening, because limits exist for reasons.
It requires calibration, because strength without proportion becomes its own form of neglect.

A bridge too far is rarely obvious in the moment. It announces itself later—through fragility, cynicism, or precedent. The enduring task of leadership is to cross the bridges that must be crossed, stop short of those that should not, and leave the city—tested, repaired, and steadier—better than it was found.

Artificial Intelligence in City Government: From Adoption to Accountability

A Practical Framework for Innovation, Oversight, and Public Trust

A collaboration between Lewis McLain & AI – A Companion to the previous blog on AI

Artificial intelligence has moved from novelty to necessity in public institutions. What began as experimental tools for drafting documents or summarizing data is now embedded in systems that influence budgeting, service delivery, enforcement prioritization, procurement screening, and public communication. Cities are discovering that AI is no longer optional—but neither is governance.

This essay unifies two truths that are often treated as competing ideas but must now be held together:

  1. AI adoption is inevitable and necessary if cities are to remain operationally effective and fiscally sustainable.
  2. AI oversight is now unavoidable wherever systems influence decisions affecting people, rights, or public trust.

These are not contradictions. They are complementary realities. Adoption without governance leads to chaos. Governance without adoption leads to irrelevance. The task for modern city leadership is to do both—intentionally.

I. The Adoption Imperative: AI as Municipal Infrastructure

Cities face structural pressures that are not temporary: constrained budgets, difficulty recruiting and retaining staff, growing service demands, and rising analytical complexity. AI tools offer a way to expand institutional capacity without expanding payrolls at the same rate.

Common municipal uses already include:

  • Drafting ordinances, reports, and correspondence
  • Summarizing public input and staff analysis
  • Forecasting revenues, expenditures, and service demand
  • Supporting customer service through chat or triage tools
  • Enhancing internal research and analytics

In this sense, AI is not a gadget. It is infrastructure, comparable to ERP systems, GIS, or financial modeling platforms. Cities that delay adoption will find themselves less capable, less competitive, and more expensive to operate.

Adoption, however, is not merely technical. AI reshapes workflows, compresses tasks, and changes how work is performed. Over time, this may alter staffing needs. The question is not whether AI will change city operations—it is already doing so. The question is whether those changes are guided or accidental.

II. The Oversight Imperative: Why Governance Is Now Required

As AI systems move beyond internal productivity and begin to influence decisions—directly or indirectly—oversight becomes essential.

AI systems are now used, or embedded through vendors, in areas such as:

  • Permit or inspection prioritization
  • Eligibility screening for programs or services
  • Vendor risk scoring and procurement screening
  • Enforcement triage
  • Public safety analytics

When AI recommendations shape outcomes, even if a human signs off, accountability cannot be vague. Errors at scale, opaque logic, and undocumented assumptions create legal exposure and erode public trust faster than traditional human error.

Oversight is required because:

  • Scale magnifies mistakes: a single flaw can affect thousands before detection.
  • Opacity undermines legitimacy: residents are less forgiving of decisions they cannot understand.
  • Legal scrutiny is increasing: courts and legislatures are paying closer attention to algorithmic decision-making.

Oversight is not about banning AI. It is about ensuring AI is used responsibly, transparently, and under human control.

III. Bridging Adoption and Oversight: A Two-Speed Framework

The tension between “move fast” and “govern carefully” dissolves once AI uses are separated by risk.

Low-Risk, Internal AI Uses

Examples include drafting, summarization, forecasting, research, and internal analytics.

Approach:
Adopt quickly, document lightly, train staff, and monitor outcomes.

Decision-Adjacent or High-Risk AI Uses

Examples include enforcement prioritization, eligibility determinations, public safety analytics, and procurement screening affecting vendors.

Approach:
Require review, documentation, transparency, and meaningful human oversight before deployment.

This two-speed framework allows cities to capture productivity benefits immediately while placing guardrails only where risk to rights, equity, or trust is real.
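The two-speed routing described above can be made concrete in a few lines of code. The sketch below is purely illustrative—the use-case names, risk flags, and required steps are assumptions for demonstration, not a prescribed standard or any city's actual policy:

```python
from dataclasses import dataclass, field

# Hypothetical risk flags marking a use as decision-adjacent or high-risk.
HIGH_RISK_FLAGS = {
    "affects_rights",   # eligibility, enforcement, benefits decisions
    "public_safety",    # policing or emergency analytics
    "vendor_scoring",   # procurement screening affecting vendors
}

@dataclass
class AIUseCase:
    name: str
    flags: set = field(default_factory=set)

def review_track(use_case: AIUseCase) -> list:
    """Route a proposed AI use to the fast track or the full-review track."""
    if use_case.flags & HIGH_RISK_FLAGS:
        # Decision-adjacent: guardrails required before deployment.
        return ["document purpose", "bias/impact review",
                "human-in-the-loop sign-off", "public disclosure"]
    # Low-risk internal productivity use: adopt quickly, then monitor.
    return ["register tool", "train staff", "monitor outcomes"]

drafting = AIUseCase("ordinance drafting")
triage = AIUseCase("enforcement triage", {"affects_rights"})
```

The point of the sketch is the design choice, not the code: the gatekeeping question is asked once, up front, so low-risk tools never wait in the same queue as rights-affecting systems.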

IV. Texas Context: Statewide Direction on AI Governance

The Texas Legislature reinforced this balanced approach through the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026. The law does not prohibit AI use. Instead, it establishes expectations for transparency, accountability, and prohibited practices—particularly for government entities.

Key elements include:

  • Disclosure when residents interact with AI systems
  • Prohibitions on social scoring by government
  • Restrictions on discriminatory AI use
  • Guardrails around biometric and surveillance applications
  • Civil penalties for unlawful or deceptive deployment
  • Creation of a statewide Artificial Intelligence Council

The message is clear: Texas expects governments to adopt AI responsibly—neither recklessly nor fearfully.

V. Implications for Cities and Transit Agencies

Cities are already using AI, often unknowingly, through vendor-provided software. Transit agencies face elevated exposure because they combine finance, enforcement, surveillance, and public safety.

The greatest risk is not AI itself, but uncontrolled AI:

  • Vendor-embedded algorithms without disclosure
  • No documented human accountability
  • No audit trail
  • No process for suspension or correction

Cities that act early reduce legal risk, preserve public trust, and maintain operational flexibility.
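The four gaps listed above can be turned into a checklist that a city applies to every AI system in its inventory. The record below is a minimal sketch under stated assumptions—every field name is hypothetical, invented here for illustration rather than drawn from any statute or vendor contract:

```python
from dataclasses import dataclass
from typing import Optional

@dataclass
class AISystemRecord:
    system: str                          # vendor product or internal tool
    vendor_embedded: bool                # algorithm supplied inside vendor software?
    disclosed_to_public: bool            # residents told AI is in use
    accountable_official: str            # named human owner; never blank
    audit_log_location: Optional[str]    # where inputs/outputs are retained
    suspension_procedure: Optional[str]  # documented way to pause the system

    def gaps(self) -> list:
        """Return which uncontrolled-AI gaps this record still leaves open."""
        open_gaps = []
        if self.vendor_embedded and not self.disclosed_to_public:
            open_gaps.append("undisclosed vendor algorithm")
        if not self.accountable_official:
            open_gaps.append("no documented human accountability")
        if self.audit_log_location is None:
            open_gaps.append("no audit trail")
        if self.suspension_procedure is None:
            open_gaps.append("no suspension or correction process")
        return open_gaps
```

A record with every field completed returns an empty list; anything else names exactly which control is missing, which is the audit conversation a city wants to have before a regulator or plaintiff has it instead.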

VI. Workforce Implications: Accurate and Defensible Language

AI will change how work is done over time. It would be inaccurate and irresponsible to claim otherwise.

At the same time, AI does not mandate immediate workforce reductions. In public institutions, workforce impacts—if they occur—are most likely to happen gradually through:

  • Attrition
  • Reassignment
  • Retraining
  • Role redesign

Final staffing decisions remain with City leadership and City Council. AI is a tool for improving capacity and sustainability, not an automatic trigger for reductions.

Conclusion: Coherent, Accountable AI

AI adoption without governance invites chaos. Governance without adoption invites stagnation. Cities that succeed will do both—moving quickly where risk is low and governing carefully where risk is high.

This is not about technology hype. It is about institutional competence in a digital age.


Appendix 1 — Texas Responsible Artificial Intelligence Governance Act (HB 149)

                                                   H.B. No. 149

AN ACT

relating to regulation of the use of artificial intelligence systems in this state; providing civil penalties.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1.  This Act may be cited as the Texas Responsible Artificial Intelligence Governance Act.

SECTION 2.  Section 503.001, Business & Commerce Code, is amended by amending Subsections (a) and (e) and adding Subsections (b-1) and (f) to read as follows:

(a)  In this section:

(1)  “Artificial intelligence system” has the meaning assigned by Section 551.001.

(2)  “Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.

(b-1)  For purposes of Subsection (b), an individual has not been informed of and has not provided consent for the capture or storage of a biometric identifier of an individual for a commercial purpose based solely on the existence of an image or other media containing one or more biometric identifiers of the individual on the Internet or other publicly available source unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate.

(e)  This section does not apply to:

(1)  voiceprint data retained by a financial institution or an affiliate of a financial institution, as those terms are defined by 15 U.S.C. Section 6809;

(2)  the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual; or

(3)  the development or deployment of an artificial intelligence model or system for the purposes of:

(A)  preventing, detecting, protecting against, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any other illegal activity;

(B)  preserving the integrity or security of a system; or

(C)  investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, a malicious or deceptive activity, or any other illegal activity.

(f)  If a biometric identifier captured for the purpose of training an artificial intelligence system is subsequently used for a commercial purpose not described by Subsection (e), the person possessing the biometric identifier is subject to:

(1)  this section’s provisions for the possession and destruction of a biometric identifier; and

(2)  the penalties associated with a violation of this section.

SECTION 3.  Section 541.104(a), Business & Commerce Code, is amended to read as follows:

(a)  A processor shall adhere to the instructions of a controller and shall assist the controller in meeting or complying with the controller’s duties or requirements under this chapter, including:

(1)  assisting the controller in responding to consumer rights requests submitted under Section 541.051 by using appropriate technical and organizational measures, as reasonably practicable, taking into account the nature of processing and the information available to the processor;

(2)  assisting the controller with regard to complying with requirements relating to the security of processing personal data, and if applicable, the personal data collected, stored, and processed by an artificial intelligence system, as that term is defined by Section 551.001, and to the notification of a breach of security of the processor’s system under Chapter 521, taking into account the nature of processing and the information available to the processor; and

(3)  providing necessary information to enable the controller to conduct and document data protection assessments under Section 541.105.

SECTION 4.  Title 11, Business & Commerce Code, is amended by adding Subtitle D to read as follows:

SUBTITLE D.  ARTIFICIAL INTELLIGENCE PROTECTION

CHAPTER 551.  GENERAL PROVISIONS

Sec. 551.001.  DEFINITIONS.  In this subtitle:

(1)  “Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

(2)  “Consumer” means an individual who is a resident of this state acting only in an individual or household context.  The term does not include an individual acting in a commercial or employment context.

(3)  “Council” means the Texas Artificial Intelligence Council established under Chapter 554.

Sec. 551.002.  APPLICABILITY OF SUBTITLE.  This subtitle applies only to a person who:

(1)  promotes, advertises, or conducts business in this state;

(2)  produces a product or service used by residents of this state; or

(3)  develops or deploys an artificial intelligence system in this state.

Sec. 551.003.  CONSTRUCTION AND APPLICATION OF SUBTITLE.  This subtitle shall be broadly construed and applied to promote its underlying purposes, which are to:

(1)  facilitate and advance the responsible development and use of artificial intelligence systems;

(2)  protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems;

(3)  provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and

(4)  provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.

CHAPTER 552.  ARTIFICIAL INTELLIGENCE PROTECTION

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 552.001.  DEFINITIONS.  In this chapter:

(1)  “Deployer” means a person who deploys an artificial intelligence system for use in this state.

(2)  “Developer” means a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state.

(3)  “Governmental entity” means any department, commission, board, office, authority, or other administrative unit of this state or of any political subdivision of this state, that exercises governmental functions under the authority of the laws of this state.  The term does not include:

(A)  a hospital district created under the Health and Safety Code or Article IX, Texas Constitution; or

(B)  an institution of higher education, as defined by Section 61.003, Education Code, including any university system or any component institution of the system.

Sec. 552.002.  CONSTRUCTION OF CHAPTER.  This chapter may not be construed to:

(1)  impose a requirement on a person that adversely affects the rights or freedoms of any person, including the right of free speech; or

(2)  authorize any department or agency other than the Department of Insurance to regulate or oversee the business of insurance.

Sec. 552.003.  LOCAL PREEMPTION.  This chapter supersedes and preempts any ordinance, resolution, rule, or other regulation adopted by a political subdivision regarding the use of artificial intelligence systems.

SUBCHAPTER B. DUTIES AND PROHIBITIONS ON USE OF ARTIFICIAL INTELLIGENCE

Sec. 552.051.  DISCLOSURE TO CONSUMERS.  (a)  In this section, “health care services” means services related to human health or to the diagnosis, prevention, or treatment of a human disease or impairment provided by an individual licensed, registered, or certified under applicable state or federal law to provide those services.

(b)  A governmental agency that makes available an artificial intelligence system intended to interact with consumers shall disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system.

(c)  A person is required to make the disclosure under Subsection (b) regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.

(d)  A disclosure under Subsection (b):

(1)  must be clear and conspicuous;

(2)  must be written in plain language; and

(3)  may not use a dark pattern, as that term is defined by Section 541.001.

(e)  A disclosure under Subsection (b) may be provided by using a hyperlink to direct a consumer to a separate Internet web page.

(f)  If an artificial intelligence system is used in relation to health care service or treatment, the provider of the service or treatment shall provide the disclosure under Subsection (b) to the recipient of the service or treatment or the recipient’s personal representative not later than the date the service or treatment is first provided, except in the case of emergency, in which case the provider shall provide the required disclosure as soon as reasonably possible.

Sec. 552.052.  MANIPULATION OF HUMAN BEHAVIOR.  A person may not develop or deploy an artificial intelligence system in a manner that intentionally aims to incite or encourage a person to:

(1)  commit physical self-harm, including suicide;

(2)  harm another person; or

(3)  engage in criminal activity.

Sec. 552.053.  SOCIAL SCORING.  A governmental entity may not use or deploy an artificial intelligence system that evaluates or classifies a natural person or group of natural persons based on social behavior or personal characteristics, whether known, inferred, or predicted, with the intent to calculate or assign a social score or similar categorical estimation or valuation of the person or group of persons that results or may result in:

(1)  detrimental or unfavorable treatment of a person or group of persons in a social context unrelated to the context in which the behavior or characteristics were observed or noted;

(2)  detrimental or unfavorable treatment of a person or group of persons that is unjustified or disproportionate to the nature or gravity of the observed or noted behavior or characteristics; or

(3)  the infringement of any right guaranteed under the United States Constitution, the Texas Constitution, or state or federal law.

Sec. 552.054.  CAPTURE OF BIOMETRIC DATA.  (a)  In this section, “biometric data” means data generated by automatic measurements of an individual’s biological characteristics.  The term includes a fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual.  The term does not include a physical or digital photograph or data generated from a physical or digital photograph, a video or audio recording or data generated from a video or audio recording, or information collected, used, or stored for health care treatment, payment, or operations under the Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. Section 1320d et seq.).

(b)  A governmental entity may not develop or deploy an artificial intelligence system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source without the individual’s consent, if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or state or federal law.

(c)  A violation of Section 503.001 is a violation of this section.

Sec. 552.055.  CONSTITUTIONAL PROTECTION.  (a)  A person may not develop or deploy an artificial intelligence system with the sole intent for the artificial intelligence system to infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution.

(b)  This section is remedial in purpose and may not be construed to create or expand any right guaranteed by the United States Constitution.

Sec. 552.056.  UNLAWFUL DISCRIMINATION.  (a)  In this section:

(1)  “Financial institution” has the meaning assigned by Section 201.101, Finance Code.

(2)  “Insurance entity” means:

(A)  an entity described by Section 82.002(a), Insurance Code;

(B)  a fraternal benefit society regulated under Chapter 885, Insurance Code; or

(C)  the developer of an artificial intelligence system used by an entity described by Paragraph (A) or (B).

(3)  “Protected class” means a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.

(b)  A person may not develop or deploy an artificial intelligence system with the intent to unlawfully discriminate against a protected class in violation of state or federal law.

(c)  For purposes of this section, a disparate impact is not sufficient by itself to demonstrate an intent to discriminate.

(d)  This section does not apply to an insurance entity for purposes of providing insurance services if the entity is subject to applicable statutes regulating unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the business of insurance.

(e)  A federally insured financial institution is considered to be in compliance with this section if the institution complies with all federal and state banking laws and regulations.

Sec. 552.057.  CERTAIN SEXUALLY EXPLICIT CONTENT AND CHILD PORNOGRAPHY.  A person may not:

(1)  develop or distribute an artificial intelligence system with the sole intent of producing, assisting or aiding in producing, or distributing:

(A)  visual material in violation of Section 43.26, Penal Code; or

(B)  deep fake videos or images in violation of Section 21.165, Penal Code; or

(2)  intentionally develop or distribute an artificial intelligence system that engages in text-based conversations that simulate or describe sexual conduct, as that term is defined by Section 43.25, Penal Code, while impersonating or imitating a child younger than 18 years of age.

SUBCHAPTER C.  ENFORCEMENT

Sec. 552.101.  ENFORCEMENT AUTHORITY.  (a)  The attorney general has exclusive authority to enforce this chapter, except to the extent provided by Section 552.106.

(b)  This chapter does not provide a basis for, and is not subject to, a private right of action for a violation of this chapter or any other law.

Sec. 552.102.  INFORMATION AND COMPLAINTS.  The attorney general shall create and maintain an online mechanism on the attorney general’s Internet website through which a consumer may submit a complaint under this chapter to the attorney general.

Sec. 552.103.  INVESTIGATIVE AUTHORITY.  (a)  If the attorney general receives a complaint through the online mechanism under Section 552.102 alleging a violation of this chapter, the attorney general may issue a civil investigative demand to determine if a violation has occurred.  The attorney general shall issue demands in accordance with and under the procedures established under Section 15.10.

(b)  The attorney general may request from the person reported through the online mechanism, pursuant to a civil investigative demand issued under Subsection (a):

(1)  a high-level description of the purpose, intended use, deployment context, and associated benefits of the artificial intelligence system with which the person is affiliated;

(2)  a description of the type of data used to program or train the artificial intelligence system;

(3)  a high-level description of the categories of data processed as inputs for the artificial intelligence system;

(4)  a high-level description of the outputs produced by the artificial intelligence system;

(5)  any metrics the person uses to evaluate the performance of the artificial intelligence system;

(6)  any known limitations of the artificial intelligence system;

(7)  a high-level description of the post-deployment monitoring and user safeguards the person uses for the artificial intelligence system, including, if the person is a deployer, the oversight, use, and learning process established by the person to address issues arising from the system’s deployment; or

(8)  any other relevant documentation reasonably necessary for the attorney general to conduct an investigation under this section.

Sec. 552.104.  NOTICE OF VIOLATION; OPPORTUNITY TO CURE.  (a)  If the attorney general determines that a person has violated or is violating this chapter, the attorney general shall notify the person in writing of the determination, identifying the specific provisions of this chapter the attorney general alleges have been or are being violated.

(b)  The attorney general may not bring an action against the person:

(1)  before the 60th day after the date the attorney general provides the notice under Subsection (a); or

(2)  if, before the 60th day after the date the attorney general provides the notice under Subsection (a), the person:

(A)  cures the identified violation; and

(B)  provides the attorney general with a written statement that the person has:

(i)  cured the alleged violation;

(ii)  provided supporting documentation to show the manner in which the person cured the violation; and

(iii)  made any necessary changes to internal policies to reasonably prevent further violation of this chapter.

Sec. 552.105.  CIVIL PENALTY; INJUNCTION.  (a)  A person who violates this chapter and does not cure the violation under Section 552.104 is liable to this state for a civil penalty in an amount of:

(1)  for each violation the court determines to be curable or a breach of a statement submitted to the attorney general under Section 552.104(b)(2), not less than $10,000 and not more than $12,000;

(2)  for each violation the court determines to be uncurable, not less than $80,000 and not more than $200,000; and

(3)  for a continued violation, not less than $2,000 and not more than $40,000 for each day the violation continues.

(b)  The attorney general may bring an action in the name of this state to:

(1)  collect a civil penalty under this section;

(2)  seek injunctive relief against further violation of this chapter; and

(3)  recover attorney’s fees and reasonable court costs or other investigative expenses.

(c)  There is a rebuttable presumption that a person used reasonable care as required under this chapter.

(d)  A defendant in an action under this section may seek an expedited hearing or other process, including a request for declaratory judgment, if the person believes in good faith that the person has not violated this chapter.

(e)  A defendant in an action under this section may not be found liable if:

(1)  another person uses the artificial intelligence system affiliated with the defendant in a manner prohibited by this chapter; or

(2)  the defendant discovers a violation of this chapter through:

(A)  feedback from a developer, deployer, or other person who believes a violation has occurred;

(B)  testing, including adversarial testing or red-team testing;

(C)  following guidelines set by applicable state agencies; or

(D)  if the defendant substantially complies with the most recent version of the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” published by the National Institute of Standards and Technology or another nationally or internationally recognized risk management framework for artificial intelligence systems, an internal review process.

(f)  The attorney general may not bring an action to collect a civil penalty under this section against a person for an artificial intelligence system that has not been deployed.

Sec. 552.106.  ENFORCEMENT ACTIONS BY STATE AGENCIES.  (a)  A state agency may impose sanctions against a person licensed, registered, or certified by that agency for a violation of Subchapter B if:

(1)  the person has been found in violation of this chapter under Section 552.105; and

(2)  the attorney general has recommended additional enforcement by the applicable agency.

(b)  Sanctions under this section may include:

(1)  suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and

(2)  a monetary penalty not to exceed $100,000.

CHAPTER 553.  ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 553.001.  DEFINITIONS.  In this chapter:

(1)  “Applicable agency” means a department of this state established by law to regulate certain types of business activity in this state and the people engaging in that business, including the issuance of licenses and registrations, that the department determines would regulate a program participant if the person were not operating under this chapter.

(2)  “Department” means the Texas Department of Information Resources.

(3)  “Program” means the regulatory sandbox program established under this chapter that allows a person, without being licensed or registered under the laws of this state, to test an artificial intelligence system for a limited time and on a limited basis.

(4)  “Program participant” means a person whose application to participate in the program is approved and who may test an artificial intelligence system under this chapter.

SUBCHAPTER B.  SANDBOX PROGRAM FRAMEWORK

Sec. 553.051.  ESTABLISHMENT OF SANDBOX PROGRAM.  (a)  The department, in consultation with the council, shall create a regulatory sandbox program that enables a person to obtain legal protection and limited access to the market in this state to test innovative artificial intelligence systems without obtaining a license, registration, or other regulatory authorization.

(b)  The program is designed to:

(1)  promote the safe and innovative use of artificial intelligence systems across various sectors including healthcare, finance, education, and public services;

(2)  encourage responsible deployment of artificial intelligence systems while balancing the need for consumer protection, privacy, and public safety;

(3)  provide clear guidelines for a person who develops an artificial intelligence system to test systems while certain laws and regulations related to the testing are waived or suspended; and

(4)  allow a person to engage in research, training, testing, or other pre-deployment activities to develop an artificial intelligence system.

(c)  The attorney general may not file or pursue charges against a program participant for violation of a law or regulation waived under this chapter that occurs during the testing period.

(d)  A state agency may not file or pursue punitive action against a program participant, including the imposition of a fine or the suspension or revocation of a license, registration, or other authorization, for violation of a law or regulation waived under this chapter that occurs during the testing period.

(e)  Notwithstanding Subsections (c) and (d), the requirements of Subchapter B, Chapter 552, may not be waived, and the attorney general or a state agency may file or pursue charges or action against a program participant who violates that subchapter.

Sec. 553.052.  APPLICATION FOR PROGRAM PARTICIPATION.  (a)  A person must obtain approval from the department and any applicable agency before testing an artificial intelligence system under the program.

(b)  The department by rule shall prescribe the application form.  The form must require the applicant to:

(1)  provide a detailed description of the artificial intelligence system the applicant desires to test in the program, and its intended use;

(2)  include a benefit assessment that addresses potential impacts on consumers, privacy, and public safety;

(3)  describe the applicant’s plan for mitigating any adverse consequences that may occur during the test; and

(4)  provide proof of compliance with any applicable federal artificial intelligence laws and regulations.

Sec. 553.053.  DURATION AND SCOPE OF PARTICIPATION.  (a)  A program participant approved by the department and each applicable agency may test and deploy an artificial intelligence system under the program for a period of not more than 36 months.

(b)  The department may extend a test under this chapter if the department finds good cause for the test to continue.

Sec. 553.054.  EFFICIENT USE OF RESOURCES.  The department shall coordinate the activities under this subchapter and any other law relating to artificial intelligence systems to ensure efficient system implementation and to streamline the use of department resources, including information sharing and personnel.

SUBCHAPTER C.  OVERSIGHT AND COMPLIANCE

Sec. 553.101.  COORDINATION WITH APPLICABLE AGENCY.  (a)  The department shall coordinate with all applicable agencies to oversee the operation of a program participant.

(b)  The council or an applicable agency may recommend to the department that a program participant be removed from the program if the council or applicable agency finds that the program participant’s artificial intelligence system:

(1)  poses an undue risk to public safety or welfare;

(2)  violates any federal law or regulation; or

(3)  violates any state law or regulation not waived under the program.

Sec. 553.102.  PERIODIC REPORT BY PROGRAM PARTICIPANT.  (a)  A program participant shall provide a quarterly report to the department.

(b)  The report shall include:

(1)  metrics for the artificial intelligence system’s performance;

(2)  updates on how the artificial intelligence system mitigates any risks associated with its operation; and

(3)  feedback from consumers and affected stakeholders that are using an artificial intelligence system tested under this chapter.

(c)  The department shall maintain confidentiality regarding the intellectual property, trade secrets, and other sensitive information it obtains through the program.

Sec. 553.103.  ANNUAL REPORT BY DEPARTMENT.  (a)  The department shall submit an annual report to the legislature.

(b)  The report shall include:

(1)  the number of program participants testing an artificial intelligence system in the program;

(2)  the overall performance and impact of artificial intelligence systems tested in the program; and

(3)  recommendations on changes to laws or regulations for future legislative consideration.

CHAPTER 554.  TEXAS ARTIFICIAL INTELLIGENCE COUNCIL

SUBCHAPTER A.  CREATION AND ORGANIZATION OF COUNCIL

Sec. 554.001.  CREATION OF COUNCIL.  (a)  The Texas Artificial Intelligence Council is created to:

(1)  ensure artificial intelligence systems in this state are ethical and developed in the public’s best interest;

(2)  ensure artificial intelligence systems in this state do not harm public safety or undermine individual freedoms by finding issues and making recommendations to the legislature regarding the Penal Code and Chapter 82, Civil Practice and Remedies Code;

(3)  identify existing laws and regulations that impede innovation in the development of artificial intelligence systems and recommend appropriate reforms;

(4)  analyze opportunities to improve the efficiency and effectiveness of state government operations through the use of artificial intelligence systems;

(5)  make recommendations to applicable state agencies regarding the use of artificial intelligence systems to improve the agencies’ efficiency and effectiveness;

(6)  evaluate potential instances of regulatory capture, including undue influence by technology companies or disproportionate burdens on smaller innovators caused by the use of artificial intelligence systems;

(7)  evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users through the use of artificial intelligence systems;

(8)  offer guidance and recommendations to the legislature on the ethical and legal use of artificial intelligence systems;

(9)  conduct and publish the results of a study on the current regulatory environment for artificial intelligence systems;

(10)  receive reports from the Department of Information Resources regarding the regulatory sandbox program under Chapter 553; and

(11)  make recommendations for improvements to the regulatory sandbox program under Chapter 553.

(b)  The council is administratively attached to the Department of Information Resources, and the department shall provide administrative support to the council as provided by this section.

(c)  The Department of Information Resources and the council shall enter into a memorandum of understanding detailing:

(1)  the administrative support the council requires from the department to fulfill the council’s purposes;

(2)  the reimbursement of administrative expenses to the department; and

(3)  any other provisions necessary to ensure the efficient operation of the council.

Sec. 554.002.  COUNCIL MEMBERSHIP.  (a)  The council is composed of seven members as follows:

(1)  three members of the public appointed by the governor;

(2)  two members of the public appointed by the lieutenant governor; and

(3)  two members of the public appointed by the speaker of the house of representatives.

(b)  Members of the council serve staggered four-year terms, with the terms of three or four members expiring every two years.

(c)  The governor shall appoint a chair from among the members, and the council shall elect a vice chair from its membership.

(d)  The council may establish an advisory board composed of individuals from the public who possess expertise directly related to the council’s functions, including technical, ethical, regulatory, and other relevant areas.

Sec. 554.003.  QUALIFICATIONS.  Members of the council must be Texas residents and have knowledge or expertise in one or more of the following areas:

(1)  artificial intelligence systems;

(2)  data privacy and security;

(3)  ethics in technology or law;

(4)  public policy and regulation;

(5)  risk management related to artificial intelligence systems;

(6)  improving the efficiency and effectiveness of governmental operations; or

(7)  anticompetitive practices and market fairness.

Sec. 554.004.  STAFF AND ADMINISTRATION.  The council may hire an executive director and other personnel as necessary to perform its duties.

SUBCHAPTER B.  POWERS AND DUTIES OF COUNCIL

Sec. 554.101.  ISSUANCE OF REPORTS.  (a)  The council may issue reports to the legislature regarding the use of artificial intelligence systems in this state.

(b)  The council may issue reports on:

(1)  the compliance of artificial intelligence systems in this state with the laws of this state;

(2)  the ethical implications of deploying artificial intelligence systems in this state;

(3)  data privacy and security concerns related to artificial intelligence systems in this state; or

(4)  potential liability or legal risks associated with the use of artificial intelligence systems in this state.

Sec. 554.102.  TRAINING AND EDUCATIONAL OUTREACH.  The council shall conduct training programs for state agencies and local governments on the use of artificial intelligence systems.

Sec. 554.103.  LIMITATION OF AUTHORITY.  The council may not:

(1)  adopt rules or promulgate guidance that is binding for any entity;

(2)  interfere with or override the operation of a state agency; or

(3)  perform a duty or exercise a power not granted by this chapter.

SECTION 5.  Section 325.011, Government Code, is amended to read as follows:

Sec. 325.011.  CRITERIA FOR REVIEW.  The commission and its staff shall consider the following criteria in determining whether a public need exists for the continuation of a state agency or its advisory committees or for the performance of the functions of the agency or its advisory committees:

(1)  the efficiency and effectiveness with which the agency or the advisory committee operates;

(2)(A)  an identification of the mission, goals, and objectives intended for the agency or advisory committee and of the problem or need that the agency or advisory committee was intended to address; and

(B)  the extent to which the mission, goals, and objectives have been achieved and the problem or need has been addressed;

(3)(A)  an identification of any activities of the agency in addition to those granted by statute and of the authority for those activities; and

(B)  the extent to which those activities are needed;

(4)  an assessment of authority of the agency relating to fees, inspections, enforcement, and penalties;

(5)  whether less restrictive or alternative methods of performing any function that the agency performs could adequately protect or provide service to the public;

(6)  the extent to which the jurisdiction of the agency and the programs administered by the agency overlap or duplicate those of other agencies, the extent to which the agency coordinates with those agencies, and the extent to which the programs administered by the agency can be consolidated with the programs of other state agencies;

(7)  the promptness and effectiveness with which the agency addresses complaints concerning entities or other persons affected by the agency, including an assessment of the agency’s administrative hearings process;

(8)  an assessment of the agency’s rulemaking process and the extent to which the agency has encouraged participation by the public in making its rules and decisions and the extent to which the public participation has resulted in rules that benefit the public;

(9)  the extent to which the agency has complied with:

(A)  federal and state laws and applicable rules regarding equality of employment opportunity and the rights and privacy of individuals; and

(B)  state law and applicable rules of any state agency regarding purchasing guidelines and programs for historically underutilized businesses;

(10)  the extent to which the agency issues and enforces rules relating to potential conflicts of interest of its employees;

(11)  the extent to which the agency complies with Chapters 551 and 552 and follows records management practices that enable the agency to respond efficiently to requests for public information;

(12)  the effect of federal intervention or loss of federal funds if the agency is abolished;

(13)  the extent to which the purpose and effectiveness of reporting requirements imposed on the agency justifies the continuation of the requirement; [and]

(14)  an assessment of the agency’s cybersecurity practices using confidential information available from the Department of Information Resources or any other appropriate state agency; and

(15)  an assessment of the agency’s use of artificial intelligence systems, as that term is defined by Section 551.001, Business & Commerce Code, in its operations and its oversight of the use of artificial intelligence systems by persons under the agency’s jurisdiction, and any related impact on the agency’s ability to achieve its mission, goals, and objectives, made using information available from the Department of Information Resources, the attorney general, or any other appropriate state agency.

SECTION 6.  Section 2054.068(b), Government Code, is amended to read as follows:

(b)  The department shall collect from each state agency information on the status and condition of the agency’s information technology infrastructure, including information regarding:

(1)  the agency’s information security program;

(2)  an inventory of the agency’s servers, mainframes, cloud services, and other information technology equipment;

(3)  identification of vendors that operate and manage the agency’s information technology infrastructure; [and]

(4)  any additional related information requested by the department; and

(5)  an evaluation of the use or considered use of artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, by each state agency.

SECTION 7.  Section 2054.0965(b), Government Code, is amended to read as follows:

(b)  Except as otherwise modified by rules adopted by the department, the review must include:

(1)  an inventory of the agency’s major information systems, as defined by Section 2054.008, and other operational or logistical components related to deployment of information resources as prescribed by the department;

(2)  an inventory of the agency’s major databases, artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, and applications;

(3)  a description of the agency’s existing and planned telecommunications network configuration;

(4)  an analysis of how information systems, components, databases, applications, and other information resources have been deployed by the agency in support of:

(A)  applicable achievement goals established under Section 2056.006 and the state strategic plan adopted under Section 2056.009;

(B)  the state strategic plan for information resources; and

(C)  the agency’s business objectives, mission, and goals;

(5)  agency information necessary to support the state goals for interoperability and reuse; and

(6)  confirmation by the agency of compliance with state statutes, rules, and standards relating to information resources.

SECTION 8.  Not later than September 1, 2026, the attorney general shall post on the attorney general’s Internet website the information and online mechanism required by Section 552.102, Business & Commerce Code, as added by this Act.

SECTION 9.  (a)  Notwithstanding any other section of this Act, in a state fiscal year, a state agency to which this Act applies is not required to implement a provision found in another section of this Act that is drafted as a mandatory provision imposing a duty on the agency to take an action unless money is specifically appropriated to the agency for that fiscal year to carry out that duty.  The agency may implement the provision in that fiscal year to the extent other funding is available to the agency to do so.

(b)  If, as authorized by Subsection (a) of this section, the state agency does not implement the mandatory provision in a state fiscal year, the state agency, in its legislative budget request for the next state fiscal biennium, shall certify that fact to the Legislative Budget Board and include a written estimate of the costs of implementing the provision in each year of that next state fiscal biennium.

SECTION 10.  This Act takes effect January 1, 2026.

    President of the Senate           Speaker of the House      

I certify that H.B. No. 149 was passed by the House on April 23, 2025, by the following vote:  Yeas 146, Nays 3, 1 present, not voting; and that the House concurred in Senate amendments to H.B. No. 149 on May 30, 2025, by the following vote:  Yeas 121, Nays 17, 2 present, not voting.

______________________________

Chief Clerk of the House   

I certify that H.B. No. 149 was passed by the Senate, with amendments, on May 23, 2025, by the following vote:  Yeas 31, Nays 0.

______________________________

Secretary of the Senate   

APPROVED: __________________

                 Date       

          __________________

               Governor       


Appendix 2 — Model Ordinance: Responsible Use of Artificial Intelligence in City Operations

ORDINANCE NO. ______

AN ORDINANCE

relating to the responsible use of artificial intelligence systems by the City; establishing transparency, accountability, and oversight requirements; and providing for implementation and administration.

WHEREAS, the City recognizes that artificial intelligence (“AI”) systems are increasingly used to improve operational efficiency, service delivery, data analysis, and internal workflows; and

WHEREAS, the City further recognizes that certain uses of AI may influence decisions affecting residents, employees, vendors, or regulated parties and therefore require appropriate oversight; and

WHEREAS, the City seeks to encourage responsible innovation while preserving public trust, transparency, and accountability; and

WHEREAS, the Texas Legislature has enacted the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026, establishing statewide standards for AI use by government entities; and

WHEREAS, the City recognizes that the adoption of artificial intelligence tools may, over time, change how work is performed and how staffing needs are structured, and that any such impacts are expected to occur gradually through attrition, reassignment, or role redesign rather than immediate workforce reductions;

NOW, THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF __________, TEXAS:

Section 1. Definitions

For purposes of this Ordinance:

  1. “Artificial Intelligence System” means a computational system that uses machine learning, statistical modeling, or related techniques to perform tasks normally associated with human intelligence, including analysis, prediction, classification, content generation, or prioritization.
  2. “Decision-Adjacent AI” means an AI system that materially influences, prioritizes, or recommends outcomes related to enforcement, eligibility, allocation of resources, personnel actions, procurement decisions, or public services, even if final decisions are made by a human.
  3. “High-Risk AI Use” means deployment of an AI system that directly or indirectly affects individual rights, access to services, enforcement actions, or legally protected interests.
  4. “Department” means any City department, office, division, or agency.

Section 2. Permitted Use of Artificial Intelligence

(a) Internal Productivity Uses. Departments may deploy AI systems for internal productivity and analytical purposes, including but not limited to:

  • Drafting and summarization of documents
  • Data analysis and forecasting
  • Workflow automation
  • Research and internal reporting
  • Customer-service chat tools providing general information (with disclaimers as appropriate)

Such uses shall not require prior Council approval but shall be subject to internal documentation requirements.

(b) Decision-Adjacent Uses. AI systems that influence or support decisions affecting residents, employees, vendors, or regulated entities may be deployed only in accordance with Sections 3 and 4 of this Ordinance.

Section 3. Prohibited Uses

No Department shall deploy or use an AI system that:

  1. Performs social scoring of individuals or groups based on behavior, personal traits, or reputation for the purpose of denying services, benefits, or rights;
  2. Intentionally discriminates against a protected class in violation of state or federal law;
  3. Generates or deploys biometric identification or surveillance in violation of constitutional protections;
  4. Produces or facilitates unlawful deep-fake or deceptive content;
  5. Operates as a fully automated decision-making system without meaningful human review in matters affecting legal rights or obligations.

Section 4. Oversight and Approval for High-Risk AI Uses

(a) Inventory Requirement. The City Manager shall maintain a centralized AI Systems Inventory identifying:

  • Each AI system in use
  • The Department deploying the system
  • The system’s purpose
  • Whether the use is classified as high-risk

(b) Approval Process. Prior to deployment of any High-Risk AI Use, the Department must:

  1. Submit a written justification describing the system’s purpose and scope;
  2. Identify the data sources used by the system;
  3. Describe human oversight mechanisms;
  4. Obtain approval from:
    • The City Manager (or designee), and
    • The City Attorney for legal compliance review.

(c) Human Accountability. Each AI system shall have a designated human owner responsible for:

  • Monitoring performance
  • Responding to errors or complaints
  • Suspending use if risks are identified

Section 5. Transparency and Public Disclosure

(a) Disclosure to the Public. When a City AI system interacts directly with residents, the City shall provide clear notice that the interaction involves AI.

(b) Public Reporting. The City shall publish annually:

  • A summary of AI systems in use
  • The general purposes of high-risk AI systems
  • Contact information for public inquiries

No proprietary or security-sensitive information shall be disclosed.

Section 6. Procurement and Vendor Requirements

All City contracts involving AI systems shall, where applicable:

  1. Require disclosure of AI functions;
  2. Prohibit undisclosed algorithmic decision-making;
  3. Allow the City to audit or review AI system outputs relevant to City operations;
  4. Require vendors to notify the City of material changes to AI functionality.

Section 7. Review and Sunset

(a) Periodic Review. High-risk AI systems shall be reviewed at least annually to assess:

  • Accuracy
  • Bias
  • Continued necessity
  • Compliance with this Ordinance

(b) Sunset Authority. The City Manager may suspend or terminate use of any AI system that poses unacceptable risk or fails compliance review.

Section 8. Training

The City shall provide appropriate training to employees involved in:

  • Deploying AI systems
  • Supervising AI-assisted workflows
  • Interpreting AI-generated outputs

Section 9. Severability

If any provision of this Ordinance is held invalid, such invalidity shall not affect the remaining provisions.

Section 10. Effective Date

This Ordinance shall take effect immediately upon adoption.


Appendix 3 — City Manager Administrative Regulation: Responsible Use of Artificial Intelligence

ADMINISTRATIVE REGULATION NO. ___

Subject: Responsible Use of Artificial Intelligence (AI) in City Operations
Authority: Ordinance No. ___ (Responsible Use of Artificial Intelligence)
Issued by: City Manager
Effective Date: __________

1. Purpose

This Administrative Regulation establishes operational procedures for the responsible deployment, oversight, and monitoring of artificial intelligence (AI) systems used by the City, consistent with adopted Council policy and applicable state law.

The intent is to:

  • Enable rapid adoption of AI for productivity and service delivery;
  • Ensure transparency and accountability for higher-risk uses; and
  • Protect the City, employees, and residents from unintended consequences.

2. Scope

This regulation applies to all City departments, offices, and divisions that:

  • Develop, procure, deploy, or use AI systems; or
  • Rely on vendor-provided software that includes AI functionality.

3. AI System Classification

Departments shall classify AI systems into one of the following categories:

A. Tier 1 — Internal Productivity AI

Examples:

  • Document drafting and summarization
  • Data analysis and forecasting
  • Internal research and reporting
  • Workflow automation

Oversight Level:

  • Department-level approval
  • Registration in AI Inventory

B. Tier 2 — Decision-Adjacent AI

Examples:

  • Permit or inspection prioritization
  • Vendor or application risk scoring
  • Resource allocation recommendations
  • Enforcement or compliance triage

Oversight Level:

  • City Manager approval
  • Legal review
  • Annual performance review

C. Tier 3 — High-Risk AI

Examples:

  • AI influencing enforcement actions
  • Eligibility determinations
  • Public safety analytics
  • Biometric or surveillance tools

Oversight Level:

  • City Manager approval
  • City Attorney review
  • Documented human-in-the-loop controls
  • Annual audit and Council notification
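The tiering above reduces to a short decision rule. The sketch below is illustrative only; the attribute names (`affects_rights`, `influences_decisions`) are hypothetical shorthand for the regulation's categories, not terms the regulation defines:

```python
def classify_ai_system(affects_rights: bool, influences_decisions: bool) -> int:
    """Map a proposed AI use to a tier, mirroring the regulation's categories.

    Tier 3 (High-Risk): affects individual rights, enforcement, or eligibility.
    Tier 2 (Decision-Adjacent): influences or recommends decisions, even when
        a human makes the final call.
    Tier 1 (Internal Productivity): everything else.
    """
    if affects_rights:
        return 3
    if influences_decisions:
        return 2
    return 1

# Illustrative calls (hypothetical use cases):
assert classify_ai_system(affects_rights=True, influences_decisions=True) == 3    # public safety analytics
assert classify_ai_system(affects_rights=False, influences_decisions=True) == 2   # permit prioritization
assert classify_ai_system(affects_rights=False, influences_decisions=False) == 1  # document drafting
```

In practice, of course, these questions would be answered on the approval paperwork by the deploying department, not computed automatically.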

4. AI Systems Inventory

The City Manager’s Office shall maintain a centralized AI Systems Inventory, which includes:

  • System name and vendor
  • Department owner
  • Purpose and classification tier
  • Date of deployment
  • Oversight requirements

Departments shall update the inventory prior to deploying any new AI system.
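The inventory fields listed above map naturally onto a simple record type. A minimal sketch, assuming a hypothetical registration routine rather than any particular City system:

```python
from dataclasses import dataclass, field
from datetime import date

@dataclass
class InventoryEntry:
    """One record in the centralized AI Systems Inventory."""
    system_name: str
    vendor: str
    department_owner: str
    purpose: str
    tier: int                      # 1, 2, or 3 under the regulation's classification
    deployment_date: date
    oversight_requirements: list[str] = field(default_factory=list)

def register(inventory: list[InventoryEntry], entry: InventoryEntry) -> None:
    """Registration precedes deployment; duplicate system names are rejected."""
    if any(e.system_name == entry.system_name for e in inventory):
        raise ValueError(f"{entry.system_name} is already registered")
    inventory.append(entry)
```

The point of the sketch is the ordering constraint: an entry exists before a system goes live, so the annual reviews described later have a complete population to draw from.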

5. Approval Process

A. Tier 1 Systems

  • Approved by Department Director
  • Registered in inventory

B. Tier 2 and Tier 3 Systems

Departments must submit:

  1. A description of the system and intended use
  2. Data sources and inputs
  3. Description of human oversight
  4. Risk mitigation measures

Approval required from:

  • City Manager (or designee)
  • City Attorney (for legal compliance)

6. Human Oversight & Accountability

Each AI system shall have a designated System Owner responsible for:

  • Monitoring system outputs
  • Responding to errors or complaints
  • Suspending use if risks emerge
  • Coordinating audits or reviews

No AI system may operate as a fully autonomous decision-maker for actions affecting legal rights or obligations.

7. Vendor & Procurement Controls

Procurement involving AI systems shall:

  • Identify AI functionality explicitly in solicitations
  • Require vendors to disclose material AI updates
  • Prohibit undisclosed algorithmic decision-making
  • Preserve City audit and review rights

8. Monitoring, Review & Sunset

  • Tier 2 and Tier 3 systems shall undergo annual review.
  • Systems may be suspended or sunset if:
    • Accuracy degrades
    • Bias is identified
    • Legal risk increases
    • The system no longer serves a defined purpose

9. Training

Departments deploying AI shall ensure appropriate staff training covering:

  • Proper interpretation of AI outputs
  • Limitations of AI systems
  • Escalation and error-handling procedures

10. Reporting to Council

The City Manager shall provide Council with:

  • An annual summary of AI systems in use
  • Identification of Tier 3 (High-Risk) systems
  • Any material incidents or corrective actions

11. Workforce Considerations

The use of artificial intelligence systems may change job functions and workflows over time. Departments shall:

  • Use AI to augment employee capabilities wherever possible;
  • Prioritize retraining, reassignment, and natural attrition when workflows change;
  • Coordinate with Human Resources before deploying AI systems that materially alter job duties; and
  • Recognize that long-term staffing impacts, if any, remain subject to City Manager and City Council authority.

12. Effective Date

This Administrative Regulation is effective immediately upon issuance.

Appendix 4 — Public-Facing FAQ: Responsible Use of Artificial Intelligence in City Operations

What is this ordinance about?

This ordinance establishes clear rules for how the City may use artificial intelligence (AI) tools. It allows the City to use modern technology to improve efficiency and service delivery while ensuring that higher-risk uses are transparent, accountable, and overseen by people.

Is the City already using artificial intelligence?

Yes. Like most modern organizations, the City already uses limited AI-enabled tools for tasks such as document drafting, data analysis, customer service support, and vendor-provided software systems.

This ordinance ensures those tools are used consistently and responsibly.

Does this ordinance ban artificial intelligence?

No.
The ordinance does not ban AI. It encourages responsible adoption of AI for productivity and internal efficiency while placing guardrails on uses that could affect people’s rights or access to services.

Why is the City adopting rules now?

AI tools are becoming more common and more capable. Clear rules help ensure:

  • Transparency in how AI is used
  • Accountability for outcomes
  • Compliance with new Texas law
  • Public trust in City operations

The Texas Legislature recently enacted statewide standards for AI use by government entities, and this ordinance aligns the City with those expectations.

Will artificial intelligence affect City jobs?

AI may change how work is done over time, just as previous technologies have.

This ordinance does not authorize immediate workforce reductions. Any long-term impacts are expected to occur gradually and, where possible, through:

  • Natural attrition
  • Reassignment
  • Retraining
  • Changes in job duties

Final staffing decisions remain with City leadership and City Council.

Will AI replace City employees?

AI tools are intended to assist employees, not replace human judgment. For higher-risk uses, the ordinance requires meaningful human oversight and accountability.

Can AI make decisions about me automatically?

No.
The ordinance prohibits fully automated decision-making that affects legal rights, enforcement actions, or access to services without human review.

AI may provide information or recommendations, but people remain responsible for decisions.

Will the City use AI for surveillance or facial recognition?

The ordinance prohibits AI uses that violate constitutional protections, including improper biometric surveillance.

Any use of biometric or surveillance-related AI would require strict legal review and compliance with state and federal law.

How will I know if I’m interacting with AI?

If the City uses AI systems that interact directly with residents, the City must clearly disclose that you are interacting with an AI system.

Does this apply to police or public safety?

Yes.
AI tools used in public safety contexts are considered higher-risk and require additional review, approval, and oversight. AI systems may not independently make enforcement decisions.

Who is responsible if an AI system makes a mistake?

Each AI system has a designated City employee responsible for monitoring its use, addressing errors, and suspending the system if necessary.

Responsibility remains with the City—not the software.

Will the public be able to see how AI is used?

Yes.
The City will publish an annual summary describing:

  • The types of AI systems in use
  • Their general purpose
  • How residents can ask questions or raise concerns

Sensitive or proprietary information will not be disclosed.

Does this create a new board or bureaucracy?

No.
Oversight is handled through existing City leadership and administrative structures.

Is there a cost to adopting this ordinance?

There is no direct cost associated with adoption. Over time, responsible AI use may help control costs by improving productivity and efficiency.

How often will this policy be reviewed?

Higher-risk AI systems are reviewed annually. The ordinance itself may be updated as technology and law evolve.

Who can I contact with questions or concerns?

Residents may contact the City Manager’s Office or submit inquiries through the City’s website. Information on AI use and reporting channels will be publicly available.

Bottom Line

This ordinance ensures the City:

  • Uses modern tools responsibly
  • Maintains human accountability
  • Protects public trust
  • Aligns with Texas law
  • Adapts thoughtfully to technological change

An Update on Drone Uses in Texas Municipalities

A second collaboration between Lewis McLain & AI

From Tactical Tools to a Quiet Redefinition of First Response

A decade ago, a municipal drone program in Texas usually meant a small team, a locked cabinet, and a handful of specially trained officers who were called out when circumstances justified it. The drone was an accessory—useful, sometimes impressive, but peripheral to the ordinary rhythm of public safety.

That is no longer the case.

Across Texas, drones are being absorbed into the daily mechanics of emergency response. In a growing number of cities, they are no longer something an officer brings to a scene. They are something the city sends—often before the first patrol car, engine, or ambulance has cleared an intersection.

This shift is subtle, technical, and easily misunderstood. But it represents one of the most consequential changes in municipal public safety design in a generation.


The quiet shift from tools to systems

The defining change is not better cameras or longer flight times. It is program design.

Early drone programs were built around people: pilots, certifications, and equipment checklists. Today’s programs are built around systems—launch infrastructure, dispatch logic, real-time command centers, and policies that define when a drone may be used and, just as importantly, when it may not.

Cities like Arlington illustrate this evolution clearly. Arlington’s drones are not stored in trunks or deployed opportunistically. They launch from fixed docking stations, are controlled through the city’s real-time operations center, and are dispatched to calls the way any other responder would be. The drone’s role is not to replace officers, but to give them something they rarely had before arrival: certainty.

Is someone actually inside the building? Is the suspect still there? Is the person lying in the roadway injured or already moving? These are small questions, but they shape everything that follows. In many cases, the presence of a drone overhead resolves a situation before physical contact ever occurs.

That pattern—early information reducing risk—is now being repeated, in different forms, across the state.


North Texas as an early laboratory

In North Texas, the progression from experimentation to normalization is especially visible.

Arlington’s program has become a reference point, not because it is flashy, but because it works. Drones are treated as routine assets, subject to policy, supervision, and after-action review. Their value is measured in response times and avoided escalations, not in flight hours.

Nearby, Dallas is navigating a more complex path. Dallas already operates one of the most active municipal drone programs in the state, but scale changes everything. Dense neighborhoods, layered airspace, multiple airports, and heightened civil-liberties scrutiny mean that Dallas cannot simply replicate what smaller cities have done.

Instead, Dallas appears to be doing something more consequential: deliberately embedding “Drone as First Responder” capability into its broader public-safety technology framework. Procurement language and public statements now describe drones verifying caller information while officers respond—a quiet but important acknowledgement that drones are becoming part of the dispatch process itself. If Dallas succeeds, it will establish a model for large, complex cities that have so far watched DFR from a distance.

Smaller cities have moved faster.

Prosper, for example, has embraced automation as a way to overcome limited staffing and long travel distances. Its program emphasizes speed—sub-two-minute arrivals made possible by automated docking stations that handle charging and readiness without human intervention. Prosper’s experience suggests that cities do not have to grow into DFR gradually; some can leap directly to system-level deployment.

Cities like Euless represent another important strand of adoption. Their programs are smaller, more cautious, and intentionally bounded. They launch drones to specific call types, collect experience, and adjust policy as they go. These cities matter because they demonstrate how DFR spreads laterally, city by city, through observation and imitation rather than mandates or statewide directives.


South Texas and the widening geography of DFR

DFR is not a North Texas phenomenon.

In the Rio Grande Valley, Edinburg has publicly embraced dispatch-driven drone response for crashes, crimes in progress, and search-and-rescue missions, including night operations using thermal imaging. In regions where heat, terrain, and distance complicate traditional response, the value of rapid aerial awareness is obvious.

Further west, Laredo has framed drones as part of a broader rapid-response network rather than a narrow policing tool. Discussions there extend beyond observation to include overdose response and medical support, pointing toward a future where drones do more than watch—they enable intervention while ground units close the gap.

Meanwhile, cities like Pearland have quietly done the hardest work of all: making DFR ordinary. Pearland’s early focus on remote operations and program governance is frequently cited by other cities, even when it draws little public attention. Its lesson is simple but powerful: the more boring a drone program becomes, the more likely it is to scale.


What 2026 will likely bring

By 2026, Texas municipalities will no longer debate drones in abstract terms. The conversation will shift to coverage, performance, and restraint.

City leaders will ask how much of their jurisdiction can be reached within two or three minutes, and what it costs to achieve that standard. DFR coverage maps will begin to resemble fire-station service areas, and response-time percentiles will replace anecdotal success stories.
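Behind those coverage questions sits simple geometry: the area one dock can serve within a response budget is set by cruise speed and launch latency. A back-of-the-envelope sketch, with every figure invented for illustration rather than drawn from any cited program:

```python
import math

def dock_coverage_radius_mi(response_budget_s: float,
                            launch_latency_s: float,
                            cruise_speed_mph: float) -> float:
    """Radius (miles) reachable within the response budget after launch delay."""
    flight_time_h = max(response_budget_s - launch_latency_s, 0) / 3600
    return cruise_speed_mph * flight_time_h

def docks_needed(area_sq_mi: float, radius_mi: float, overlap: float = 1.3) -> int:
    """Rough dock count: area divided by one dock's circle, padded for overlap."""
    per_dock_sq_mi = math.pi * radius_mi ** 2
    return math.ceil(area_sq_mi * overlap / per_dock_sq_mi)

# Illustrative: a 120-second budget, 30-second launch, 45 mph cruise
r = dock_coverage_radius_mi(120, 30, 45)   # 1.125 miles
print(f"radius: {r:.3f} mi, docks for 100 sq mi: {docks_needed(100, r)}")
```

The arithmetic is crude, but it is exactly the kind of fire-station-style service-area math city leaders will be asking their staff to show.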

Dispatch ownership will matter more than pilot skill. The most successful programs will be those in which drones are managed as part of the call-taking and response ecosystem, not as specialty assets waiting for permission. Pilots will become supervisors of systems, not just operators of aircraft.

At the same time, privacy will increasingly determine the pace of expansion. Cities that define limits early—what drones will never be used for, how long video is kept, who can access it—will move faster and with less friction. Those that delay these conversations will find themselves stalled, not by technology, but by public distrust.

Federal airspace rules will continue to separate tactical programs from scalable ones. Dense metro areas will demand more sophisticated solutions—automated docks, detect-and-avoid capabilities, and carefully designed flight corridors. The cities that solve these problems will not just have better drones; they will have better systems.

And perhaps most telling of all, drones will gradually fade from public conversation. When residents stop noticing them—when a drone overhead is no more remarkable than a patrol car passing by—the transformation will be complete.


A closing thought

Texas cities are not adopting drones because they are fashionable or futuristic. They are doing so because time matters, uncertainty creates risk, and early information saves lives—sometimes by prompting action, and sometimes by preventing it.

By 2026, the question will not be whether drones belong in municipal public safety. It will be why any city, given the chance to act earlier and safer, would choose not to.


Looking Ahead to 2026: When Drones Become Ordinary

By 2026, the most telling sign of success for municipal drone programs in Texas will not be innovation, expansion, or even capability. It will be normalcy.

The early years of public-safety drones were marked by novelty. A drone launch drew attention, generated headlines, and often triggered anxiety about surveillance or overreach. That phase is already fading. What is emerging in its place is quieter and far more consequential: drones becoming an assumed part of the response environment, much like radios, body cameras, or computer-aided dispatch systems once did.

The conversation will no longer revolve around whether a city has drones. Instead, it will focus on coverage and performance. City leaders will ask how quickly aerial eyes can reach different parts of the city, how often drones arrive before ground units, and what percentage of priority calls benefit from early visual confirmation. Response-time charts and service-area maps will replace anecdotes and demonstrations. In this sense, drones will stop being treated as technology and start being treated as infrastructure.

This shift will also clarify responsibility. The most mature programs will no longer center on individual pilots or specialty units. Ownership will move decisively toward dispatch and real-time operations centers. Drones will be launched because a call meets predefined criteria, not because someone happens to be available or enthusiastic. Pilots will increasingly function as system supervisors, ensuring compliance, safety, and continuity, rather than as hands-on operators for every flight.

At the same time, restraint will become just as important as reach. Cities that succeed will be those that articulate, early and clearly, what drones are not for. By 2026, residents will expect drone programs to come with explicit boundaries: no routine patrols, no generalized surveillance, no silent expansion of mission. Programs that fail to define those limits will find themselves stalled, regardless of how capable the technology may be.

Federal airspace rules and urban complexity will further separate casual programs from durable ones. Large cities will discover that scaling drones is less about buying more aircraft and more about solving coordination problems—airspace, redundancy, automation, and integration with other systems. The cities that work through those constraints will not just fly more often; they will fly predictably and defensibly.

And then, gradually, the attention will drift away.

When a drone arriving overhead is no longer remarkable—when it is simply understood as one of the first tools a city sends to make sense of an uncertain situation—the transition will be complete. The public will not notice drones because they will no longer symbolize change. They will symbolize continuity.

That is the destination Texas municipalities are approaching: not a future where drones dominate public safety, but one where they quietly support it—reducing uncertainty, improving judgment, and often preventing escalation precisely because they arrive early and ask the simplest question first: What is really happening here?

By 2026, the most advanced drone programs in Texas will not feel futuristic at all. They will feel inevitable.

How Could the Minnesota Fraud Happen — and Why Texas Didn’t See the Same Outcome

A collaboration between Lewis McLain & AI

The recent revelation that federal prosecutors believe up to half of roughly $18 billion in federal funds administered through Minnesota programs may have been fraudulently claimed has raised a deeper and more troubling question than simple criminal wrongdoing. The central issue is not whether fraud occurred — it clearly did — but how such a vast scheme could persist for years without decisive intervention, and why similar failures did not reach the same scale in other states, particularly Texas.

Answering that question requires stepping away from partisan framing and examining program design, administrative architecture, timing of awareness, and institutional decision-making.


I. The Nature of the Programs Involved

Most of the funds at issue flowed through federally funded, state-administered social service programs, including:

  • Child nutrition programs
  • Medicaid-related services (including autism therapy and home-based supports)
  • Housing and disability assistance

These programs share several structural features:

  1. Claim-based reimbursement
    Providers self-report services and are reimbursed automatically.
  2. Pay-first, audit-later design
    Verification occurs months or years after funds are disbursed.
  3. Private delivery model
    States administer eligibility and payment, but do not deliver services directly.

This structure prioritizes speed, access, and continuity of care, particularly for vulnerable populations. It also creates an inherent vulnerability: fraud can scale faster than oversight.


II. What Was the Same Across States

Minnesota’s experience was not unique in its basic mechanics. Similar fraud dynamics appeared in California, New York, Illinois, and federal pandemic programs.

Across all jurisdictions:

  • Emergency COVID waivers loosened documentation and oversight
  • Provider enrollment was expedited
  • Site visits and in-person verification were suspended
  • Payment systems remained automated

Fraud exploited time gaps, not policy intent. These systems were designed to avoid denying care — not to stop sophisticated abuse in real time.


III. Where Minnesota Was Different

Minnesota’s case diverged from other states in three critical ways.

1. Scale and concentration

Other states experienced:

  • Thousands of small or mid-sized fraud cases
  • Losses spread across geography and programs

Minnesota experienced:

  • Highly organized networks
  • Multi-program overlap
  • Extraordinary dollar concentration per scheme

Federal prosecutors described the activity as “industrial-scale fraud,” not opportunistic abuse.


2. Early warnings before peak losses

Unlike many states where fraud was discovered after funds were gone, Minnesota agencies:

  • Flagged suspicious activity as early as 2019–2020
  • Documented implausible service volumes
  • Raised concerns internally and to federal partners

In the Feeding Our Future case — the catalyst for the broader investigation — state officials attempted to halt funding, triggering litigation that slowed enforcement. Payments continued while warning signs mounted.

This is a critical distinction: Minnesota saw the smoke before the fire peaked.


3. Fragmented authority

Minnesota’s human-services system is highly decentralized:

  • Provider approval, payment, audit, and enforcement are split across agencies
  • Counties and nonprofits operate with significant autonomy
  • Courts can limit administrative action during disputes

No single entity had both the authority and speed to stop payments decisively once fraud was suspected.


IV. When the Administration Became Aware — and How

The timeline matters.

  • 2019–early 2020: Program staff note irregular claims
  • Summer 2020: State agencies formally report concerns to federal partners
  • Late 2020: State attempts to terminate funding; litigation intervenes
  • February 2021: Referral to the FBI; federal criminal investigation begins
  • January 2022: FBI raids and indictments become public
  • 2022–2025: Investigation expands across multiple programs, revealing the larger scope

Senior state leadership was aware of suspected fraud well before public disclosure, but precise documentation of when the governor’s office was formally briefed remains unclear in the public record.

What is clear is that awareness preceded full intervention, and intervention lagged the growth of the schemes.


V. Why This Did Not Dominate the 2024 Election

Despite early knowledge within agencies, the issue did not meaningfully shape the 2024 election for several reasons:

  1. The full scale was not publicly known
    The $18 billion figure emerged only in late 2025.
  2. Early cases appeared isolated
    Feeding Our Future (~$300 million) looked large but contained.
  3. Complexity discouraged amplification
    The story lacked a simple narrative during a crowded election cycle.
  4. Investigations were ongoing
    Media and campaigns avoid claims not yet fully adjudicated.

By the time the magnitude became undeniable, the election had passed.


VI. Comparison to Texas: Same Programs, Different Outcomes

Texas administers the same federal programs — yet did not experience Minnesota-scale losses. The difference lies in governance design, not moral superiority.

1. Centralized authority

Texas operates through a strongly centralized Health and Human Services Commission. Provider enrollment, payment, and termination authority are consolidated.

Result: Payments can be halted quickly.


2. Provider enrollment rigor

Texas imposes:

  • Lengthy onboarding
  • Fingerprinting and ownership scrutiny
  • Financial viability checks

This slows access — and blocks shell entities.


3. Willingness to disrupt services

Texas is institutionally willing to:

  • Suspend providers first
  • Litigate later
  • Accept short-term service disruption

Minnesota showed greater hesitation, prioritizing continuity and legal caution.


4. Enforcement posture

Texas uses:

  • An aggressive Medicaid Fraud Control Unit
  • Early Attorney General involvement
  • Parallel civil and criminal actions

Fraud is treated as law enforcement first, not program management.


5. Blunt controls over elegant analytics

Texas relies on:

  • Hard caps
  • Billing thresholds
  • Manual overrides

The system is crude — but constraining. Minnesota relied more on trust and review.


VII. The Tradeoff at the Core

The contrast reveals a fundamental governance choice:

  • Minnesota prioritized access, trust, and decentralization
  • Texas prioritized control, authority, and risk tolerance

Neither model is clean. Both have costs. Only one prevented runaway scale.


VIII. What This Case Ultimately Reveals

This was not a failure of compassion, nor evidence of coordinated state wrongdoing. It was a failure of system architecture.

Modern aid systems that optimize for:

  • Speed
  • Equity
  • Access

must also invest in:

  • Real-time anomaly detection
  • Unified authority
  • Rapid payment suspension powers

Without those, fraud will always scale faster than oversight.
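The first of those investments, real-time anomaly detection, need not be elegant to be useful. A minimal sketch of a pre-payment screen that compares each provider's claimed volume against the peer median (all names and numbers are invented for illustration):

```python
from statistics import median

def flag_outliers(claims: dict[str, float], multiple: float = 10.0) -> list[str]:
    """Flag providers claiming more than `multiple` times the peer median.

    Median-based on purpose: one enormous claim cannot hide itself by
    inflating the average the way it would with a mean-based threshold.
    """
    m = median(claims.values())
    return [pid for pid, v in claims.items() if v > multiple * m]

# Invented monthly meal counts per claimed site:
claims = {"site_a": 1_200, "site_b": 1_350, "site_c": 1_100, "site_d": 95_000}
print(flag_outliers(claims))  # → ['site_d']
```

A flag like this does not prove fraud; it buys time, converting a pay-first, audit-later system into a pay-unless-implausible one while human reviewers look closer.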


Conclusion

Minnesota did not invent fraud, and Texas did not eliminate it. The difference lies in how quickly each system can say “stop” when something goes wrong.

Minnesota saw the warning signs — but lacked the integrated authority to act decisively. Texas acts decisively — sometimes harshly — and accepts the consequences.

That is the real lesson of the Minnesota case: not who failed morally, but which systems are structurally capable of stopping abuse once it begins.

Texas Local Government: Sovereignty, Delegation, Fragmentation, and the State’s Return to Planning

A collaboration between Lewis McLain & AI

Only Two Sovereigns

Any serious discussion of Texas local government must begin with a foundational constitutional fact:

In the United States, there are only two levels of sovereign government:
the federal government and the states.

That is the full list.

Counties, cities, school districts, special districts, authorities, councils, boards, and commissions are not sovereign. They possess no inherent authority. They exist only because a state legislature has chosen to delegate specific powers to them, and those powers may be expanded, limited, preempted, reorganized, or withdrawn entirely.

Texas local government is therefore not a story of decentralization.
It is a story of delegated administration, followed—inevitably—by state-directed coordination when delegation produced excessive fragmentation.


The State of Texas as Sovereign and System Designer

The State of Texas is sovereign within its constitutional sphere. That sovereignty includes the authority to:

  • Create local governments
  • Define and limit their powers
  • Redraw or freeze their boundaries
  • Preempt their ordinances
  • Reorganize or abolish them

Local governments are not junior partners in sovereignty. They are instruments through which the state governs a vast and diverse territory.

From the beginning, Texas made a defining structural choice:
rather than consolidate government as complexity increased, it would delegate narrowly, preserve local identity, and retain sovereignty at the state level. That choice explains the layered system that followed.


Counties: The First Subdivision of State Power

Counties were Texas’s original subdivision of state authority, inherited from Anglo-American legal traditions and adopted after independence and statehood.

They were designed for a frontier world:

  • Sparse population
  • Horseback travel
  • Local courts
  • Recordkeeping
  • Elections
  • Law enforcement

During the 19th century, Texas rapidly carved itself into counties so residents could reach a county seat in roughly a day’s travel. By the early 20th century, the county map had largely frozen at 254 counties, a number that remains unchanged today.

Counties are constitutional entities, but they are governed strictly by Dillon’s Rule. They have no inherent powers, no residual authority, and little flexibility to adapt structurally. Once the county map was locked in place, counties became increasingly mismatched to Texas’s urbanizing reality—too small in some areas, too weak in others, and too rigid everywhere.

Rather than consolidate counties, Texas chose to work around them.


Dillon’s Rule: The Legal Engine of Delegation

The doctrine that made this system possible is Dillon’s Rule, named after John Forrest Dillon (1831–1914), Chief Justice of the Iowa Supreme Court and later a professor at Columbia Law School. His 1872 treatise, Commentaries on the Law of Municipal Corporations, emerged during a period of explosive city growth and widespread municipal corruption.

Dillon rejected the notion that local governments possessed inherent authority. He articulated a rule designed to preserve state supremacy:

A local government may exercise only
(1) powers expressly granted by the legislature,
(2) powers necessarily implied from those grants, and
(3) powers essential to its declared purpose—not merely convenient, but indispensable.
Any reasonable doubt is resolved against the local government.

Texas did not merely adopt Dillon’s Rule; it embedded it structurally. Counties, special districts, independent school districts, and authorities operate squarely under Dillon’s Rule. Even cities escape it only partially through home-rule charters, and only to the extent the Legislature allows.

Dillon’s Rule explains why Texas governance favors many narrow entities over few powerful ones.


Cities: Delegated Urban Management, Not Local Sovereignty

As towns grew denser, counties proved incapable of providing urban services. The state responded by authorizing cities to manage:

  • Police and fire protection
  • Streets and utilities
  • Zoning and land use
  • Local infrastructure

Cities are therefore delegated urban managers, not sovereign governments.

Texas later adopted home-rule charters to give larger cities greater flexibility, but home rule is widely misunderstood. It does not reverse Dillon’s Rule. It merely allows cities to act unless prohibited—while preserving the Legislature’s power to preempt, override, or limit local authority at any time.

Recent state preemption is not a breakdown of the system. It is the system operating as designed.


Independent School Districts: Function Over Geography

Education exposed the limits of place-based governance earlier than any other function.

Counties were too uneven.
Cities were too political.
Education required stability, long planning horizons, and uniform oversight.

Texas responded by removing education from both counties and cities and creating Independent School Districts.

ISDs are:

  • Single-purpose governments
  • Granted independent taxing authority
  • Authorized to issue bonds
  • Subject to state curriculum and accountability mandates

ISDs do not answer to cities or counties. They answer directly to the state. This was one of Texas’s earliest and clearest moves toward functional specialization over territorial governance.


Special Districts: Precision Instead of Consolidation

As Texas industrialized and urbanized in the 20th century, the Legislature faced increasingly specific problems:

  • Flood control
  • Water supply
  • Drainage
  • Fire protection
  • Hospitals
  • Ports and navigation

Rather than expand general-purpose governments, Texas created special districts—single-mission entities with narrow authority and dedicated funding streams.

Special districts are not accidental inefficiencies. They reflect a deliberate state preference:

Solve problems with precision, not with consolidation.

The result was effectiveness and speed, at the cost of growing fragmentation.


MUDs and Authorities: Growth and Risk as State Policy

Municipal Utility Districts and authorities are often mistaken for private or quasi-private entities. Legally, they are governments.

MUDs:

  • Are created under state law
  • Levy taxes
  • Issue bonds
  • Are governed by elected boards
  • Provide essential infrastructure

They allow the state to:

  • Enable development before cities arrive
  • Finance infrastructure without municipal debt
  • Shift costs to future residents
  • Avoid restructuring counties

Similarly, transit authorities, toll authorities, housing authorities, and local government corporations exist to isolate risk, bypass constitutional debt limits, and accelerate projects. These are not loopholes. They are state-designed instruments.


The Consequence: Functional Fragmentation

By the mid-20th century, Texas governance had become highly functional—and deeply fragmented:

  • Fixed counties
  • Expanding cities
  • Independent ISDs
  • Thousands of special districts
  • Authorities operating alongside cities
  • Infrastructure crossing every boundary

The system worked locally, but failed regionally.

No entity could plan coherently across jurisdictions. Funding decisions conflicted. Infrastructure systems overlapped. Federal requirements could not be met cleanly. At this point, Texas made another defining choice.

It did not consolidate governments.
It pulled planning and coordination back upward, closer to the state.


Councils of Governments: State-Authorized Coordination

Beginning in the 1960s, Texas authorized Councils of Governments (COGs) to address fragmentation.

Today:

  • 24 COGs cover the entire state
  • Each spans multiple counties
  • Membership includes cities, counties, ISDs, and districts

COGs:

  • Have no taxing authority
  • Have no regulatory power
  • Have no police power

They exist to coordinate, not to govern—to reconnect what delegation had scattered. Their weakness is intentional. They sit conceptually just beneath the state, not beneath local governments.


MPOs: Transportation Planning Pulled Upward

Transportation forced an even clearer pull-back.

Texas has 25 Metropolitan Planning Organizations, designated by the state to comply with federal law. MPOs plan, prioritize, and allocate federal transportation funding. They do not build roads, levy taxes, or override governments.

MPOs act as planning membranes between federal mandates and Texas’s fragmented local structure.


Water: Where Texas Explicitly Rejected Fragmentation

Water planning most clearly demonstrates the limits of local delegation.

Texas spans 15 major river basins, with annual rainfall ranging from under 10 inches in the west to over 50 inches in the east. Water ignores counties, cities, ISDs, and districts entirely.

Texas responded by creating:

  • Approximately 23 river authorities, organized by watershed
  • 16 Regional Water Planning Areas, overseen by the Texas Water Development Board
  • A unified State Water Plan, adopted by the Legislature

Regional Water Planning Groups govern planning, not operations. Funding eligibility flows from compliance. This is state-directed regional planning with local execution.

Texas also created 95+ Groundwater Conservation Districts, organized by aquifer rather than politics—another instance of function overriding geography.


Public Health and Other Quiet Pull-Backs

Public health produced the same result. Disease ignores jurisdictional lines. Texas authorized county, city-county, and multi-county health districts to exercise delegated state police powers regionally.

The same pattern appears elsewhere:

  • Emergency management regions
  • Workforce development boards
  • Judicial administrative regions
  • 20 Education Service Centers
  • Air-quality nonattainment regions

Each represents the same logic:

  1. Delegation fragments
  2. Fragmentation impairs system performance
  3. The state restores coordination without transferring sovereignty

Final Synthesis

Texas local government did not evolve haphazardly. It followed a consistent philosophy:

  • Preserve sovereignty at the state level
  • Delegate functions narrowly
  • Avoid consolidation
  • Specialize relentlessly
  • Pull planning back upward when fragmentation becomes unmanageable

What appears complex or chaotic is actually layered intent.

Services are delegated downward.
Planning is pulled back upward.
Sovereignty never moves.

That tension—between delegation and coordination—is not a flaw in Texas government.
It is its defining structural feature.


Sydney, Australia: An Updated Case Study on Two Previous Essays Regarding a Serious Topic

A collaboration between Lewis McLain & AI

Public tragedies have a way of collapsing time. Old debates are reopened as if they had never taken place. Long-standing policies are treated as provisional. And political reflexes reassert themselves with a familiar urgency: something must be done, and whatever is done must be fast, visible, and legislative.

A recent Reuters report describing a mass shooting at a beachside gathering in Australia illustrates this pattern with uncomfortable clarity. The event itself was horrifying. The response was predictable. Within hours, political leaders were discussing emergency parliamentary sessions, tightening gun licensing laws, and revisiting a firearm regime that has been in place for nearly three decades.

What makes this episode especially instructive is not that it occurred in Australia, but that it occurred despite Australia’s reputation for having among the strictest gun control laws in the world. The country’s post-1996 framework—created in the wake of the Port Arthur massacre—has long been cited internationally as a model of decisive legislative action. Yet here, after decades of regulation, registration, licensing, and oversight, the instinctive answer remains the same: more law.

This essay treats the Australian response not as an anomaly, but as a continuation—and confirmation—of two arguments I have made previously: one concerning mass shootings as a systems failure rather than a purely legal failure, and another concerning what I have called “one-page laws”—the belief that complex social problems can be solved by concise statutes and urgent press conferences.


The Reuters Story, Paraphrased

According to Reuters, a deadly shooting at a public gathering in Bondi shocked Australians and immediately raised questions about whether the country’s long-standing firearms regime remains adequate. One of the suspects reportedly held a legal gun license and was authorized to own multiple firearms. In response, state and federal officials suggested that parliament might be recalled to consider reforms, including changes to license duration, suitability assessments, and firearm ownership limits.

The article notes that while Australia’s gun laws dramatically reduced firearm deaths after 1996, the number of legally owned guns has since risen to levels exceeding those prior to the reforms. Advocates argue that this growth, combined with modern risks, requires updated legislation. Political leaders signaled openness to acting quickly.

What the article does not do—and what most post-tragedy coverage does not do—is explain precisely how additional laws would have prevented this specific act, or how such laws would be meaningfully enforced without expanding surveillance, discretion, or intrusion into everyday life.

That omission is not accidental. It reflects a deeper habit in public governance.


The First Essay Revisited: Mass Shootings as Systems Failures

In my earlier essay on mass shootings, I argued that these events are rarely the result of a single legal gap. Instead, they emerge from systemic breakdowns: failures of detection, communication, intervention, and follow-through. Warning signs often exist. Signals are missed, dismissed, or siloed. Institutions act sequentially rather than collectively.

The presence or absence of one additional statute does little to alter those dynamics.

The Australian case reinforces this point. The suspect was not operating in a legal vacuum. The system already required licensing, registration, and approval. The breakdown did not occur because the law was silent; it occurred because law is only one input into a much larger human system.

When tragedy strikes, however, it is far easier to amend a statute than to admit that prevention depends on imperfect human judgment, social cohesion, mental health systems, community reporting, and inter-agency coordination. Laws are tangible. Systems are messy.


The Second Essay Revisited: The Illusion of One-Page Laws

My essay on one-page laws addressed a related but broader problem: the temptation to treat legislation as a substitute for governance.

One-page laws share several characteristics:

  • They are easy to describe.
  • They signal moral seriousness.
  • They create the appearance of action.
  • They externalize complexity.

The harder questions—Who enforces this? How often? With what discretion? At what cost? With what error rate?—are deferred or ignored.

The Australian response fits this pattern precisely. Proposals to shorten license durations or tighten suitability standards sound decisive, but they conceal the real burden: reviewing thousands of existing licenses, detecting future risk in people who have not yet exhibited it, and doing so without violating basic principles of fairness or due process.

The law can authorize action. It cannot supply foresight.


Where the Two Essays Converge

Taken together, these two arguments point to a shared conclusion: legislation is often mistaken for resolution.

Mass violence is not primarily a legislative failure; it is a detection and intervention failure. One-page laws feel comforting because they compress complexity into moral clarity. But compression is not the same as control.

Australia’s experience underscores a difficult truth: once a society has implemented baseline restrictions, further legislative tightening produces diminishing returns. The remaining risk lies not in legal gaps, but in human unpredictability. Eliminating that last fraction of risk would require levels of monitoring and preemption that most free societies rightly reject.

This is the trade-off no emergency session of parliament wants to articulate.


Why the Reflex Persists

The rush to legislate after tragedy is not irrational—it is political. Laws are visible acts of leadership. They reassure the public that order is being restored. Admitting that not every horror can be prevented without dismantling civil society is a harder message to deliver.

But honesty matters.

Governance is not the art of passing laws; it is the discipline of building systems that function under stress. When tragedy is followed immediately by legislative theater, it risks substituting symbolism for substance and urgency for effectiveness.


Conclusion

The Bondi shooting is not evidence that Australia’s gun laws have failed in some absolute sense. Nor is it proof that further legislation will succeed. What it offers is a case study—one that reinforces two prior conclusions:

First, that mass violence persists even in highly regulated environments because it arises from human systems, not statutory voids.

Second, that one-page laws offer emotional relief but rarely operational solutions.

Serious problems deserve serious thinking. Not every response can be reduced to a bill number and a headline. And not every tragedy has a legislative cure.

The real challenge is resisting the comforting illusion that lawmaking alone is governance—and doing the slower, quieter, less visible work of strengthening the systems that stand between instability and catastrophe.


Population as the Primary and Predictable Driver of Local Government Forecasting

A collaboration between Lewis McLain & AI

A technical framework for staffing, facilities, and cost projection

Abstract

In local government forecasting, population is the dominant driver of service demand, staffing requirements, facility needs, and operating costs. While no municipal system can be forecast with perfect precision, population-based models—when properly structured—produce estimates that are sufficiently accurate for planning, budgeting, and capital decision-making. Crucially, population growth in cities is not a sudden or unknowable event.

Through annexation, zoning, platting, infrastructure construction, utility connections, and certificates of occupancy, population arrival is observable months or years in advance. This paper presents population not merely as a driver, but as a leading indicator, and demonstrates how cities can convert development approvals into staged population forecasts that support rational staffing, facility sizing, capital investment, and operating cost projections.


1. Introduction: Why population sits at the center

Local governments exist to provide services to people. Police protection, fire response, streets, parks, water, sanitation, administration, and regulatory oversight are all mechanisms for supporting a resident population and the activity it generates. While policy choices and service standards influence how services are delivered, the volume of demand originates with population.

Practitioners often summarize this reality informally:

“Tell me the population, and I can tell you roughly how many police officers you need.
If I know the staff, I can estimate the size of the building.
If I know the size, I can estimate the construction cost.
If I know the size, I can estimate the electricity bill.”

This paper formalizes that intuition into a defensible forecasting framework and addresses a critical objection: population is often treated as uncertain or unknowable. In practice, population growth in cities is neither sudden nor mysterious—it is permitted into existence through public processes that unfold over years.


2. Population as a base driver, not a single-variable shortcut

Population does not explain every budget line, but it explains most recurring demand when paired with a small number of modifiers.

At its core, many municipal services follow this structure:

Total Demand = α + β ⋅ Population

Where:

  • α (fixed minimum) represents baseline capacity required regardless of size (minimum staffing, governance, 24/7 coverage).
  • β (variable component) represents incremental demand generated by each additional resident.

This structure explains why:

  • Small cities appear “overstaffed” per capita (fixed minimum dominates).
  • Mid-sized and large cities stabilize into predictable staffing ratios.
  • Growth pressures emerge when population increases faster than capacity adjustments.

Population therefore functions as the load variable of local government, analogous to demand in utility planning.
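The fixed-plus-variable structure can be sketched in a few lines of Python; the α and β values below are purely illustrative assumptions, not adopted standards:

```python
# Fixed-plus-variable demand model: Total Demand = alpha + beta * Population.
# The alpha and beta values here are illustrative assumptions only.

def total_demand(population: int, alpha: float, beta: float) -> float:
    """Baseline capacity (alpha) plus per-resident increment (beta)."""
    return alpha + beta * population

# Hypothetical service: a 10-FTE fixed floor plus 1.5 FTE per 1,000 residents.
alpha, beta = 10.0, 1.5 / 1000

for pop in (2_000, 25_000, 250_000):
    fte = total_demand(pop, alpha, beta)
    print(f"pop={pop:>7,}  FTE={fte:6.1f}  FTE per 1,000={fte / pop * 1000:4.2f}")
```

Running the sketch for small, mid-sized, and large cities shows the per-capita ratio falling as the fixed minimum is spread over more residents, which is exactly the pattern described above.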


3. Why population reliably predicts service demand

3.1 People generate transactions

Residents generate:

  • Calls for service
  • Utility usage
  • Permits and inspections
  • Court activity
  • Recreation participation
  • Library circulation
  • Administrative transactions (HR, payroll, finance, IT)

While individual events vary, aggregate demand scales with population.

3.2 Capacity, not consumption, drives budgets

Municipal budgets fund capacity, not just usage:

  • Staff must be available before calls occur
  • Facilities must exist before staff are hired
  • Vehicles and equipment must be in place before service delivery

Capacity decisions are inherently population-driven.


4. Population growth is observable before it arrives

A defining feature of local government forecasting—often underappreciated—is that population growth is authorized through public approvals long before residents appear in census or utility data.

Population does not “arrive”; it progresses through a pipeline.


5. The development pipeline as a population forecasting timeline

5.1 Annexation: strategic intent (years out)

Annexation establishes:

  • Jurisdictional responsibility
  • Long-term service obligations
  • Future land-use authority

While annexation does not create immediate population, it signals where population will eventually be allowed.

Forecast role:

  • Long-range horizon marker
  • Infrastructure and service envelope planning
  • Typical lead time: 3–10 years

5.2 Zoning: maximum theoretical population

Zoning converts land into entitled density.

From zoning alone, cities can estimate:

  • Maximum dwelling units
  • Maximum population at buildout
  • Long-run service ceilings

Zoning defines upper bounds, even if timing is uncertain.

Forecast role:

  • Long-range capacity planning
  • Useful for master plans and utility sizing
  • Typical lead time: 3–7 years

5.3 Preliminary plat: credible development intent

Preliminary plat approval signals:

  • Developer capital commitment
  • Defined lot counts
  • Identified phasing

Population estimates become quantifiable, even if delivery timing varies.

Forecast role:

  • Medium-high certainty population
  • First stage for phased population modeling
  • Typical lead time: 1–3 years

5.4 Final plat: scheduled population

Final plat approval:

  • Legally creates lots
  • Locks in density and configuration
  • Triggers infrastructure construction
  • Commits impact fees and other development costs

At this point, population arrival is no longer speculative.

Forecast role:

  • High-confidence population forecasting
  • Suitable for annual budget and staffing models
  • Typical lead time: 6–24 months

5.5 Infrastructure construction: timing constraints

Once streets, utilities, and drainage are built, population arrival becomes physically constrained by construction schedules.

Forecast role:

  • Narrow timing window
  • Supports staffing lead-time decisions
  • Typical lead time: 6–18 months

5.6 Water meter connections: imminent occupancy

Water meters are one of the most reliable near-term indicators:

  • Each residential meter ≈ one household
  • Installations closely precede vertical construction

Forecast role:

  • Quarterly or monthly population forecasting
  • Just-in-time operational scaling
  • Typical lead time: 1–6 months

5.7 Certificates of Occupancy: population realized

Certificates of occupancy convert permitted population into actual population.

At this point:

  • Service demand begins immediately
  • Utility consumption appears
  • Forecasts can be validated

Forecast role:

  • Confirmation and calibration
  • Not prediction

6. Population forecasting as a confidence ladder

Development Stage      Population Certainty   Timing Precision   Planning Use
Annexation             Low                    Very low           Strategic
Zoning                 Low–Medium             Low                Capacity envelopes
Preliminary Plat       Medium                 Medium             Phased planning
Final Plat             High                   Medium–High        Budget & staffing
Infrastructure Built   Very High              High               Operational prep
Water Meters           Extremely High         Very High          Near-term ops
COs                    Certain                Exact              Validation

Population forecasting in cities is therefore graduated, not binary.
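The confidence ladder can be operationalized as a staged forecast. In the sketch below, the tract names, lot counts, occupancy years, stage weights, and persons-per-household figure are all hypothetical planning assumptions:

```python
# Staged population forecast from the development pipeline.
# All inputs below are hypothetical planning assumptions.

PERSONS_PER_HOUSEHOLD = 2.7  # assumed local average

# (tract, approved lots, expected occupancy year, certainty weight by stage)
pipeline = [
    ("Tract A (final plat)",       400, 2026, 0.90),
    ("Tract B (preliminary plat)", 650, 2027, 0.60),
    ("Tract C (zoned only)",     1_200, 2029, 0.30),
]

forecast: dict[int, float] = {}
for tract, lots, year, weight in pipeline:
    forecast[year] = forecast.get(year, 0.0) + lots * PERSONS_PER_HOUSEHOLD * weight

for year in sorted(forecast):
    print(f"{year}: +{forecast[year]:,.0f} expected residents")
```

Later-stage approvals carry higher weights, so budget and staffing decisions lean on the high-certainty rungs while zoning-stage entitlements inform only long-range capacity.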


7. From population to staffing

Once population arrival is staged, staffing can be forecast using service-specific ratios and fixed minimums.

7.1 Police example (illustrative ranges)

Sworn officers per 1,000 residents commonly stabilize within broad bands that depend on service level, demand, and known local ratios:

  • Lower demand: ~1.2–1.8
  • Moderate demand: ~1.8–2.4
  • High demand: ~2.4–3.5+

Civilian support staff often scale as a fraction of sworn staffing.

The appropriate structure is:

Officers = α_police + β_police ⋅ Population

Where α accounts for minimum 24/7 coverage and supervision.
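As a sketch, that structure might look like the following, where the fixed minimum, the per-1,000 ratio, and the civilian fraction are illustrative values drawn from the broad bands above:

```python
# Officers = alpha_police + beta_police * Population.
# All parameter defaults are illustrative, not recommended standards.

def sworn_officers(population: int,
                   alpha: float = 8.0,       # assumed minimum for 24/7 coverage
                   per_1000: float = 1.8) -> float:
    return alpha + (per_1000 / 1000) * population

def civilian_support(officers: float, fraction: float = 0.35) -> float:
    # Civilian staff modeled as an assumed fraction of sworn staffing.
    return officers * fraction

officers = sworn_officers(50_000)  # fixed floor plus ~1.8 per 1,000 residents
print(officers, civilian_support(officers))
```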


7.2 General government staffing

Administrative staffing scales with:

  • Population
  • Number of employees
  • Asset inventory
  • Transaction volume

A fixed core plus incremental per-capita growth captures this reality more accurately than pure ratios.


8. From staffing to facilities

Facilities are a function of:

  • Headcount
  • Service configuration
  • Security and public access needs

A practical planning method:

Facility Size = FTE ⋅ Gross SF per FTE

Typical blended civic office planning ranges usually fall within:

  • ~175–300 gross SF per employee

Specialized spaces (dispatch, evidence, fleet, courts) are layered on separately.
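A minimal sizing sketch, using an assumed midpoint of the range above:

```python
# Facility Size = FTE * Gross SF per FTE, with specialized space layered on.
# The 250 SF/FTE default is an assumed midpoint of the ~175-300 range.

def facility_sf(fte: float, sf_per_fte: float = 250.0,
                specialized_sf: float = 0.0) -> float:
    return fte * sf_per_fte + specialized_sf

core = facility_sf(120)                         # blended office space, 120 FTE
total = facility_sf(120, specialized_sf=6_000)  # plus e.g. dispatch/evidence
print(core, total)
```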


9. From facilities to capital and operating costs

9.1 Capital costs

Capital expansion costs are typically modeled as:

Capex = Added SF ⋅ Cost per SF ⋅ (1 + Soft Costs)

Where soft costs include design, permitting, contingencies, and escalation.


9.2 Operating costs

Facility operating costs scale predictably with size:

  • Electricity: kWh per SF per year
  • Maintenance: % of replacement value or $/SF
  • Custodial: $/SF
  • Lifecycle renewals

Electricity alone can be reasonably estimated as:

Annual Cost = SF ⋅ kWh/SF ⋅ $/kWh

This is rarely exact—but it is directionally reliable.
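The two cost formulas combine naturally in a planning model. The unit costs below are placeholder assumptions, not bids or quotes:

```python
# Capex = Added SF * Cost per SF * (1 + Soft Costs)
# Annual electricity = SF * kWh/SF * $/kWh
# All unit costs below are placeholder planning assumptions.

def capex(added_sf: float, cost_per_sf: float, soft_cost_pct: float) -> float:
    return added_sf * cost_per_sf * (1 + soft_cost_pct)

def annual_electricity(sf: float, kwh_per_sf: float, usd_per_kwh: float) -> float:
    return sf * kwh_per_sf * usd_per_kwh

added = 20_000  # SF of new space
print(f"Capex:       ${capex(added, 450, 0.30):,.0f}")
print(f"Electricity: ${annual_electricity(added, 15, 0.11):,.0f}/yr")
```

As the text notes, these outputs are directional planning magnitudes rather than precise bills.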


10. Key modifiers that refine population models

Population alone is powerful but incomplete. High-quality forecasts adjust for:

  • Density and land use
  • Daytime population and employment
  • Demographics
  • Service standards
  • Productivity and technology
  • Geographic scale (lane miles, acres)

These modifiers refine, but do not replace, population as the base driver.


11. Why growth surprises cities anyway

When cities claim growth was “unexpected,” the issue is rarely lack of information. More often:

  • Development signals were not integrated into finance models
  • Staffing and capital planning lagged approvals
  • Fixed minimums were ignored
  • Threshold effects (new stations, expansions) were deferred too long

Growth that appears sudden is usually forecastable growth that was not operationalized.


12. Conclusion

Population is the primary driver of local government demand, but more importantly, it is a predictable driver. Through annexation, zoning, platting, infrastructure construction, utility connections, and certificates of occupancy, cities possess a multi-year advance view of population arrival.

This makes it possible to:

  • Phase staffing rationally
  • Time facilities before overload
  • Align capital investment with demand
  • Improve credibility with councils, auditors, and rating agencies

In local government, population growth is not a surprise. It is a permitted, engineered, and scheduled outcome of public decisions. A forecasting system that treats population as both a driver and a leading indicator is not speculative—it is simply paying attention to the city’s own approvals.


Appendix A

Defensibility of Population-Driven Forecasting Models

A response framework for auditors, rating agencies, and governing bodies

Purpose of this appendix

This appendix addresses a common concern raised during budget reviews, audits, bond disclosures, and council deliberations:

“Population-based forecasts seem too simplistic or speculative.”

The purpose here is not to argue that population is the only factor affecting local government costs, but to demonstrate that population-driven forecasting—when anchored to development approvals and adjusted for service standards—is methodologically sound, observable, and conservative.


A.1 Population forecasting is not speculative in local government

A frequent misconception is that population forecasts rely on demographic projections or external estimates. In practice, this model relies primarily on the city’s own legally binding approvals.

Population growth enters the forecast only after it has passed through:

  • Annexation agreements
  • Zoning entitlements
  • Preliminary and final plats
  • Infrastructure construction
  • Utility connections
  • Certificates of occupancy

These are public, documented actions, not assumptions.

Key distinction for reviewers:
This model does not ask “How fast might the city grow?”
It asks “What growth has the city already approved, and when will it become occupied?”


A.2 Population is treated as a leading indicator, not a lagging one

Traditional population measures (census counts, ACS estimates) are lagging indicators. This model explicitly avoids relying on those for near-term forecasting.

Instead, it uses development milestones as leading indicators, each with increasing certainty and narrower timing windows.

For audit and disclosure purposes:

  • Early-stage entitlements affect only long-range capacity planning
  • Staffing and capital decisions are triggered only at later, high-certainty stages
  • Near-term operating impacts are tied to utility connections and COs

This layered approach prevents premature spending while avoiding reactive under-staffing.


A.3 Fixed minimums prevent over-projection in small or slow-growth cities

A common audit concern is that per-capita models overstate staffing needs.

This model explicitly separates:

  • Fixed baseline capacity (α)
  • Incremental population-driven capacity (β)

This structure:

  • Prevents unrealistic staffing increases in early growth stages
  • Accurately reflects real-world minimum staffing requirements
  • Explains why per-capita ratios vary by city size

Auditors should note that this approach is more conservative than straight-line per-capita extrapolation.


A.4 Service standards are explicit policy inputs, not hidden assumptions

Population does not automatically dictate staffing levels. Staffing reflects policy decisions.

This model requires the city to explicitly state:

  • Response time targets
  • Service frequency goals
  • Coverage expectations
  • Hours of operation

As a result:

  • Changes in staffing can be clearly attributed to either population growth or policy change
  • Council decisions are transparently reflected in forecasts
  • The model separates “growth pressure” from “service enhancements or reductions”

This clarity improves accountability rather than obscuring it.


A.5 Facilities and capital projections follow staffing, not speculation

Another concern raised by reviewers is that population forecasts may be used to justify premature capital expansion.

This model deliberately enforces a sequencing discipline:

  1. Population approvals observed
  2. Staffing thresholds reached
  3. Facility capacity constraints identified
  4. Capital expansion triggered

Facilities are not expanded because population might grow, but because staffing—already justified by approved growth—can no longer be accommodated.

This mirrors best practices in asset management and avoids front-loading debt.


A.6 Operating cost estimates use industry-standard unit costs

Electricity, maintenance, custodial, and lifecycle costs are estimated using:

  • Per-square-foot benchmarks
  • Historical city utility data where available
  • Conservative unit assumptions

These are not novel or experimental methods. They are the same unit-cost techniques commonly used in:

  • CIP planning
  • Facility condition assessments
  • Energy benchmarking
  • Budget impact statements

Auditors should view these estimates as planning magnitudes, not precise bills—and that distinction is explicitly stated in the model documentation.


A.7 The model is testable and falsifiable

A major strength of this approach is that it can be validated against actual outcomes.

As certificates of occupancy are issued:

  • Actual population arrival can be compared to forecasts
  • Staffing changes can be reconciled
  • Utility consumption can be measured

This allows:

  • Annual recalibration
  • Error tracking
  • Continuous improvement

Models that can be tested and corrected are inherently more defensible than opaque judgment-based forecasts.
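The annual recalibration loop can be sketched in a few lines; the unit counts below are invented for illustration, and real recalibration would adjust household size and absorption assumptions, not just apply a single bias factor:

```python
# Sketch of annual recalibration: compare forecast units to actual
# certificates of occupancy (COs), track error, adjust next year's forecast.
# All numbers are illustrative.

forecast_units = [120, 140, 150]   # units forecast for each of 3 years
actual_cos     = [100, 125, 140]   # units actually receiving COs

# Error tracking: mean percentage error across the observed years
errors = [(actual - fcst) / fcst
          for fcst, actual in zip(forecast_units, actual_cos)]
mean_error = sum(errors) / len(errors)

# Recalibration: scale next year's forecast by the observed bias
next_year_forecast = 160
recalibrated = next_year_forecast * (1 + mean_error)

print(f"mean forecast error: {mean_error:+.1%}")
print(f"recalibrated forecast: {recalibrated:.0f} units")
```

In this toy example the forecasts ran consistently high, so the recalibrated figure comes in below the raw 160-unit estimate, which is exactly the self-correcting behavior the appendix describes.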


A.8 Why this approach aligns with rating-agency expectations

Bond rating agencies consistently emphasize:

  • Predictability
  • Governance discipline
  • Forward planning
  • Avoidance of reactive financial decisions

This framework demonstrates:

  • Awareness of growth pressures well in advance
  • Phased responses rather than abrupt spending
  • Clear linkage between approvals, staffing, and capital
  • Conservative treatment of uncertainty

As such, population-driven forecasting anchored to development approvals should be viewed as a credit positive, not a risk.


A.9 Summary for reviewers

For audit, disclosure, and governance purposes, the following conclusions are reasonable:

  1. Population growth in cities is observable years in advance through public approvals.
  2. Using approved development as a population driver is evidence-based, not speculative.
  3. Fixed minimums and service-level inputs prevent mechanical over-projection.
  4. Staffing precedes facilities; facilities precede capital.
  5. Operating costs scale predictably with assets and space.
  6. The model is transparent, testable, and adjustable.

Therefore:
A population-driven forecasting model of this type represents a prudent, defensible, and professionally reasonable approach to long-range municipal planning.


Appendix B

Consequences of Failing to Anticipate Population Growth

A diagnostic review of reactive municipal planning

Purpose of this appendix

This appendix describes common failure patterns observed in cities that do not systematically link development approvals to population, staffing, and facility planning. These outcomes are not the result of negligence or bad intent; they typically arise from fragmented information, short planning horizons, or the absence of an integrated forecasting framework.

The patterns described below are widely recognized in municipal practice and are offered to illustrate the practical risks of reactive planning.


B.1 “Surprise growth” that was not actually a surprise

A frequent narrative in reactive cities is that growth “arrived suddenly.” In most cases, the growth was visible years earlier through zoning approvals, plats, or utility extensions but was not translated into staffing or capital plans.

Common indicators:

  • Approved subdivisions not reflected in operating forecasts
  • Development tracked only by planning staff, not finance or operations
  • Population discussed only after occupancy

Consequences:

  • Budget shocks
  • Emergency staffing requests
  • Loss of credibility with governing bodies

B.2 Knee-jerk staffing reactions

When growth impacts become unavoidable, reactive cities often respond through hurried staffing actions.

Typical symptoms:

  • Mid-year supplemental staffing requests
  • Heavy reliance on overtime
  • Accelerated hiring without workforce planning
  • Training pipelines overwhelmed

Consequences:

  • Elevated labor costs
  • Increased burnout and turnover
  • Declining service quality during growth periods
  • Inefficient long-term staffing structures

B.3 Under-sizing followed by over-correction

Without forward planning, cities often alternate between two extremes:

  1. Under-sizing due to conservative or delayed response
  2. Over-sizing in reaction to service breakdowns

Examples:

  • Facilities built too small “to be safe”
  • Rapid expansions shortly after completion
  • Swing from staffing shortages to excess capacity

Consequences:

  • Higher lifecycle costs
  • Poor space utilization
  • Perception of waste or mismanagement

B.4 Obsolete facilities at the moment of completion

Facilities planned without reference to future population often open already constrained.

Common causes:

  • Planning based on current headcount only
  • Ignoring entitled but unoccupied development
  • Failure to include expansion capability

Consequences:

  • Expensive retrofits
  • Disrupted operations during expansion
  • Shortened facility useful life

This is one of the most costly errors because capital investments are long-lived and difficult to correct.


B.5 Deferred capital followed by crisis-driven spending

Reactive cities often delay capital investment until systems fail visibly.

Typical patterns:

  • Fire stations added only after response times degrade
  • Police facilities expanded only after overcrowding
  • Utilities upgraded only after service complaints

Consequences:

  • Emergency procurement
  • Higher construction costs
  • Increased debt stress
  • Lost opportunity for phased financing

B.6 Misalignment between departments

When population intelligence is not shared across departments:

  • Planning knows what is coming
  • Finance budgets based on current year
  • Operations discover impacts last

Consequences:

  • Conflicting narratives to council
  • Fragmented decision-making
  • Reduced trust between departments

Population-driven forecasting provides a common factual baseline.


B.7 Overreliance on lagging indicators

Reactive cities often rely heavily on:

  • Census updates
  • Utility consumption after occupancy
  • Service call increases

These indicators confirm growth after it has already strained capacity.

Consequences:

  • Persistent lag between demand and response
  • Structural understaffing
  • Continual “catch-up” budgeting

B.8 Political whiplash and credibility erosion

Unanticipated growth pressures often force councils into repeated difficult votes:

  • Emergency funding requests
  • Mid-year budget amendments
  • Rapid debt authorizations

Over time, this leads to:

  • Voter skepticism
  • Council fatigue
  • Reduced tolerance for legitimate future investments

Planning failures become governance failures.


B.9 Inefficient use of taxpayer dollars

Ironically, reactive planning often costs more, not less.

Cost drivers include:

  • Overtime premiums
  • Compressed construction schedules
  • Retrofit and rework costs
  • Higher borrowing costs due to rushed timing

Proactive planning spreads costs over time and reduces risk premiums.


B.10 Organizational stress and morale impacts

Staff experience growth pressures first.

Observed impacts:

  • Chronic overtime
  • Inadequate workspace
  • Equipment shortages
  • Frustration with leadership responsiveness

Over time, this contributes to:

  • Higher turnover
  • Loss of institutional knowledge
  • Reduced service consistency

B.11 Why these failures persist

These patterns are not caused by incompetence. They persist because:

  • Growth information is siloed
  • Forecasting is viewed as speculative
  • Political incentives favor short-term restraint
  • Capital planning horizons are too short

Absent a formal framework, cities default to reaction.


B.12 Summary for governing bodies

Cities that do not integrate development approvals into population-driven forecasting commonly experience:

  1. Perceived “surprise” growth
  2. Emergency staffing responses
  3. Repeated under- and over-sizing
  4. Facilities that age prematurely
  5. Higher long-term costs
  6. Organizational strain
  7. Reduced public confidence

None of these outcomes are inevitable. They are symptoms of not using information the city already has.


B.13 Closing observation

The contrast between proactive and reactive cities is not one of optimism versus pessimism. It is a difference between:

  • Anticipation versus reaction
  • Sequencing versus scrambling
  • Planning versus explaining after the fact

Population-driven forecasting does not eliminate uncertainty. It replaces surprise with preparation.


Appendix C

Population Readiness & Forecasting Discipline Checklist

A self-assessment for proactive versus reactive cities

Purpose:
This checklist allows a city to evaluate whether it is systematically anticipating population growth—or discovering it after impacts occur. It is designed for use by city management teams, finance directors, auditors, and governing bodies.

How to use:
For each item, mark:

  • ✅ Yes / In place
  • ⚠️ Partially / Informal
  • ❌ No / Not done

Patterns matter more than individual answers.


Section 1 — Visibility of Future Population

C-1 Do we maintain a consolidated list of annexed, zoned, and entitled land with estimated buildout population?

C-2 Are preliminary and final plats tracked in a format usable by finance and operations (not just planning)?

C-3 Do we estimate population by development phase, not just at full buildout?

C-4 Is there a documented method for converting lots or units into population (household size assumptions reviewed periodically)?

C-5 Do we distinguish between long-range potential growth and near-term probable growth?

Red flag:
Population is discussed primarily in narrative terms (“fast growth,” “slowing growth”) rather than quantified and staged.
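Item C-4's lots-to-population conversion can be sketched by development phase; the household size, occupancy rate, and lot counts below are assumptions invented for illustration and should be reviewed against local data:

```python
# Hypothetical sketch of C-4: converting approved lots into population
# by development phase. All assumptions are placeholders.

HOUSEHOLD_SIZE = 2.8      # persons per occupied unit (review periodically)
OCCUPANCY_RATE = 0.95     # share of completed units occupied (placeholder)

phases = [
    {"phase": 1, "lots": 220},
    {"phase": 2, "lots": 180},
    {"phase": 3, "lots": 150},
]

def phase_population(lots: int) -> float:
    """Estimated residents added when a phase builds out."""
    return lots * OCCUPANCY_RATE * HOUSEHOLD_SIZE

for p in phases:
    p["population"] = round(phase_population(p["lots"]))

# Staged estimates let finance plan phase by phase, not only at buildout
buildout = sum(p["population"] for p in phases)
```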


Section 2 — Timing and Lead Indicators

C-6 Do we identify which development milestone triggers planning action (e.g., preliminary plat vs final plat)?

C-7 Are infrastructure completion schedules incorporated into population timing assumptions?

C-8 Are water meter installations or equivalent utility connections tracked and forecasted?

C-9 Do we use certificates of occupancy to validate and recalibrate population forecasts annually?

C-10 Is population forecasting treated as a rolling forecast, not a once-per-year estimate?

Red flag:
Population is updated only when census or ACS data is released.


Section 3 — Staffing Linkage

C-11 Does each major department have an identified population or workload driver?

C-12 Are fixed minimum staffing levels explicitly separated from growth-driven staffing?

C-13 Are staffing increases tied to forecasted population arrival, not service breakdowns?

C-14 Do hiring plans account for lead times (recruitment, academies, training)?

C-15 Can we explain recent staffing increases as either:

  • population growth, or
  • explicit policy/service-level changes?

Red flag:
Staffing requests frequently cite “we are behind” without reference to forecasted growth.


Section 4 — Facilities and Capital Planning

C-16 Are facility size requirements derived from staffing projections, not current headcount?

C-17 Do capital plans include expansion thresholds (e.g., headcount or service load triggers)?

C-18 Are new facilities designed with future expansion capability?

C-19 Are entitled-but-unoccupied developments considered when evaluating future facility adequacy?

C-20 Do we avoid building facilities that are at or near capacity on opening day?

Red flag:
Facilities require major expansion within a few years of completion.


Section 5 — Operating Cost Awareness

C-21 Are operating costs (utilities, maintenance, custodial) modeled as a function of facility size and assets?

C-22 Are utility cost impacts of expansion estimated before facilities are approved?

C-23 Do we understand how population growth affects indirect departments (HR, IT, finance)?

C-24 Are lifecycle replacement costs considered when adding capacity?

Red flag:
Operating cost increases appear as “unavoidable surprises” after facilities open.


Section 6 — Cross-Department Integration

C-25 Do planning, finance, and operations use the same population assumptions?

C-26 Is growth discussed in joint meetings, not only within planning?

C-27 Does finance receive regular updates on development pipeline status?

C-28 Are growth assumptions documented and shared, not implicit or informal?

Red flag:
Different departments give different growth narratives to council.


Section 7 — Governance and Transparency

C-29 Can we clearly explain to council why staffing or capital is needed before service failure occurs?

C-30 Are population-driven assumptions documented in budget books or CIP narratives?

C-31 Do we distinguish between:

  • growth-driven needs, and
  • discretionary service enhancements?

C-32 Can auditors or rating agencies trace growth-related decisions back to documented approvals?

Red flag:
Growth explanations rely on urgency rather than evidence.


Section 8 — Validation and Learning

C-33 Do we compare forecasted population arrival to actual COs annually?

C-34 Are forecasting errors analyzed and corrected rather than ignored?

C-35 Do we adjust household size, absorption rates, or timing assumptions over time?

Red flag:
Forecasts remain unchanged year after year despite clear deviations.


Scoring Interpretation (Optional)

  • Mostly ✅ → Proactive, anticipatory city
  • Mix of ✅ and ⚠️ → Partially planned, risk of reactive behavior
  • Many ❌ → Reactive city; growth will feel like a surprise

A city does not need perfect scores. The presence of structure, documentation, and sequencing is what matters.
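The optional scoring can be sketched as a small classifier over answer counts; the 75% and 40% cutoffs are invented for illustration, since the checklist itself stresses patterns over precise scores:

```python
# Hypothetical scoring sketch for the checklist: classify the overall
# pattern from counts of each answer. Thresholds are illustrative only.

def classify(yes: int, partial: int, no: int) -> str:
    total = yes + partial + no
    if total == 0:
        raise ValueError("no answers recorded")
    if yes / total >= 0.75:
        return "Proactive, anticipatory city"
    if no / total >= 0.4:
        return "Reactive city; growth will feel like a surprise"
    return "Partially planned; risk of reactive behavior"

assert classify(30, 4, 1) == "Proactive, anticipatory city"
assert classify(8, 10, 17) == "Reactive city; growth will feel like a surprise"
assert classify(15, 15, 5) == "Partially planned; risk of reactive behavior"
```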


Closing Note for Leadership

If a city can answer most of these questions affirmatively, it is not guessing about growth—it is managing it. If many answers are negative, the city is likely reacting to outcomes it had the power to anticipate.

Population growth does not cause planning problems.
Ignoring known growth signals does.


Appendix D

Population-Driven Planning Maturity Model

A framework for assessing and improving municipal forecasting discipline

Purpose of this appendix

This maturity model describes how cities evolve in their ability to anticipate population growth and translate it into staffing, facility, and financial planning. It recognizes that most cities are not “good” or “bad” planners; they are simply at different stages of organizational maturity.

Each level builds logically on the prior one. Advancement does not require perfection—only structure, integration, and discipline.


Level 1 — Reactive City

“We didn’t see this coming.”

Characteristics

  • Population discussed only after impacts are felt
  • Reliance on census or anecdotal indicators
  • Growth described qualitatively (“exploding,” “slowing”)
  • Staffing added only after service failure
  • Capital projects triggered by visible overcrowding
  • Frequent mid-year budget amendments

Typical behaviors

  • Emergency staffing requests
  • Heavy overtime usage
  • Facilities opened already constrained
  • Surprise operating cost increases

Organizational mindset

Growth is treated as external and unpredictable.

Risks

  • Highest long-term cost
  • Lowest credibility with councils and rating agencies
  • Chronic organizational stress

Level 2 — Aware but Unintegrated City

“Planning knows growth is coming, but others don’t act on it.”

Characteristics

  • Development pipeline tracked by planning
  • Finance and operations not fully engaged
  • Growth acknowledged but not quantified in budgets
  • Capital planning still reactive
  • Limited documentation of assumptions

Typical behaviors

  • Late staffing responses despite known development
  • Facilities planned using current headcount
  • Disconnect between planning reports and budget narratives

Organizational mindset

Growth is known, but not operationalized.

Risks

  • Continued surprises
  • Internal frustration
  • Mixed messages to council

Level 3 — Structured Forecasting City

“We model growth, but execution lags.”

Characteristics

  • Population forecasts tied to development approvals
  • Preliminary staffing models exist
  • Fixed minimums recognized
  • Capital needs identified in advance
  • Forecasts updated annually

Typical behaviors

  • Better budget explanations
  • Improved CIP alignment
  • Still some late responses due to execution gaps

Organizational mindset

Growth is forecastable, but timing discipline is still developing.

Strengths

  • Credible analysis
  • Reduced emergencies
  • Clearer governance conversations

Level 4 — Integrated Planning City

“Approvals, staffing, and capital move together.”

Characteristics

  • Development pipeline drives population timing
  • Staffing plans phased to population arrival
  • Facility sizing based on projected headcount
  • Operating costs modeled from assets
  • Cross-department coordination is routine

Typical behaviors

  • Hiring planned ahead of demand
  • Facilities open with expansion capacity
  • Capital timed to avoid crisis spending
  • Clear audit trail from approvals to costs

Organizational mindset

Growth is managed, not reacted to.

Benefits

  • Stable service delivery during growth
  • Higher workforce morale
  • Strong credibility with governing bodies

Level 5 — Adaptive, Data-Driven City

“We learn, recalibrate, and optimize continuously.”

Characteristics

  • Rolling population forecasts
  • Development milestones tracked in near-real time
  • Annual validation against COs and utility data
  • Forecast errors analyzed and corrected
  • Scenario modeling for alternative growth paths

Typical behaviors

  • Minimal surprises
  • High confidence in long-range plans
  • Early identification of inflection points
  • Proactive communication with councils and investors

Organizational mindset

Growth is a controllable system, not a threat.

Benefits

  • Lowest lifecycle cost
  • Highest service reliability
  • Institutional resilience

Summary Table

  Level   Description            Core Risk
  1       Reactive               Crisis-driven decisions
  2       Aware, unintegrated    Late responses
  3       Structured             Execution lag
  4       Integrated             Few surprises
  5       Adaptive               Minimal risk

Key Insight

Most cities are not failing—they are stuck between Levels 2 and 3. The largest gains come not from sophisticated analytics, but from integration and timing discipline.

Progression does not require:

  • Perfect forecasts
  • Advanced software
  • Large consulting engagements

It requires:

  • Using approvals the city already grants
  • Sharing population assumptions across departments
  • Sequencing decisions intentionally

Closing Observation

Cities do not choose whether they grow. They choose whether growth feels like a surprise or a scheduled event.

This maturity model makes that choice visible.

The Supreme Court and Texas Redistricting: Arguments, Standards, and the Court’s Conclusions

A collaboration between Lewis McLain & AI

For more than fifty years, Texas has been at the center of American redistricting law. Few states have produced as many major Supreme Court decisions shaping the meaning of the Voting Rights Act, the boundaries of racial gerrymandering doctrine, and—perhaps most significantly—the Court’s modern unwillingness to police partisan gerrymandering.

Two cases define the modern era for Texas: LULAC v. Perry (2006) and Abbott v. Perez (2018). Together, they reveal how the Court analyzes racial vote dilution, when partisan motives are permissible, how intent is inferred or rejected, and what evidentiary burdens challengers must meet.

At the heart of the Court’s reasoning is a recurring tension:

  • the Constitution forbids racial discrimination in redistricting,
  • the Voting Rights Act prohibits plans that diminish minority voting strength,
  • but the Court has repeatedly held that partisan advantage, even aggressive partisan advantage, is not generally unconstitutional.

Texas’s maps have allowed the Court to articulate, refine, and—many argue—narrow these doctrines.


I. LULAC v. Perry (2006): Partisan Motives Allowed, but Not Minority Vote Dilution

Background

In 2003, after winning unified control of state government, Texas Republicans enacted a mid-decade congressional redistricting plan replacing the court-drawn map used in 2002. It was an openly partisan effort to convert a congressional delegation that had favored Democrats into a Republican-leaning one.

Challengers argued:

  1. The mid-decade redistricting itself was unconstitutional.
  2. The legislature’s partisan intent violated the Equal Protection Clause.
  3. The plan diluted Latino voting strength in violation of Section 2 of the Voting Rights Act, particularly in old District 23.
  4. Several districts were racial gerrymanders in which race, rather than politics, predominated.

Arguments Before the Court

  • Challengers:
    • Texas had engaged in unprecedented partisan manipulation lacking a legitimate state purpose.
    • The dismantling of Latino opportunity districts—especially District 23—reduced the community’s ability to elect its preferred candidate.
    • Race was used as a tool to achieve partisan ends, in violation of Shaw v. Reno-line racial gerrymandering rules.
  • Texas:
    • Nothing in the Constitution forbids mid-decade redistricting.
    • Political gerrymandering, even when aggressive and obvious, was allowed under Davis v. Bandemer (1986).
    • Latino voters in District 23 were not “cohesive” enough to qualify for Section 2 protection.
    • District configurations reflected permissible political considerations.

The Court’s Decision

The Court’s ruling was a fractured opinion, but several clear conclusions emerged.

1. Mid-Decade Redistricting Is Constitutional

The Court held that states are not restricted to once-a-decade redistricting. Nothing in the Constitution or federal statute bars legislatures from replacing a map mid-cycle.
This effectively legitimized Texas’s overtly partisan decision to redraw the map simply because political control had shifted.

2. Partisan Gerrymandering Claims Remain Non-Justiciable (or Nearly So)

The Court again declined to articulate a manageable standard for judging partisan gerrymandering.
Justice Kennedy, writing for the controlling plurality, expressed concern about severe partisan abuses but concluded that no judicially administrable rule existed.

Key takeaway:
Texas’s partisan motivation, even if blatant, was not itself unconstitutional.

3. Section 2 Violation in District 23: Latino Voting Strength Was Illegally Diluted

This was the major substantive ruling.

The Court found that Texas dismantled an existing Latino opportunity district (CD-23) precisely because Latino voters were on the verge of electing their preferred candidate.
The legislature:

  • removed tens of thousands of cohesive Latino voters from the district,
  • replaced them with low-turnout Latino populations less likely to vote against the incumbent,
  • and justified the move under the guise of creating a new Latino-majority district elsewhere.

This manipulation, the Court held, denied Latino voters an equal opportunity to elect their candidate of choice, violating Section 2.

4. Racial Gerrymandering Claims Mostly Fail

The Court rejected most Shaw-type racial gerrymandering claims because plaintiffs failed to prove that race, rather than politics, predominated.
This reflects a theme that becomes even stronger in later cases:
when race and politics correlate—as they often do in Texas—challengers must provide powerful evidence that race, not party, drove the lines.


II. Abbott v. Perez (2018): A High Bar for Proving Discriminatory Intent

Background

After the 2010 census, Texas enacted new maps. A federal district court found that several districts were intentionally discriminatory and put interim maps in place for the 2012 elections. In 2013, Texas enacted maps that were largely identical to the court's own interim maps.

Challengers argued that:

  1. The original 2011 maps were passed with discriminatory intent.
  2. The 2013 maps, though based on the court’s design, continued to embody the taint of 2011.
  3. Multiple districts across Texas diluted minority voting strength or were racial gerrymanders.

Texas argued that:

  • The 2013 maps were valid because they were largely adopted from a court-approved version.
  • Any discriminatory intent from 2011 could not be imputed to the 2013 legislature.
  • Plaintiffs bore the burden of proving intentional discrimination district by district.

The Court’s Decision

In a 5–4 ruling, the Supreme Court reversed almost all findings of discriminatory intent against Texas.

1. Burden of Proof Is on Challengers, Not the State

The Court rejected the lower court’s presumption that Texas acted with discriminatory intent in 2013 merely because the 2011 legislature had been found to do so.

Key Holding:
A finding of discriminatory intent in a prior map does not shift the burden; challengers must prove new intent for each new plan.

This significantly tightened the evidentiary bar.

2. Presumption of Legislative Good Faith

Justice Alito, writing for the majority, emphasized a longstanding principle:

Legislatures are entitled to a presumption of good faith unless challengers provide direct and persuasive evidence otherwise.

This presumption made it much harder to prove racial discrimination unless emails, testimony, or map-drawing files showed explicit racial motives.

3. Section 2 Vote Dilution Claims Largely Rejected

Challengers failed to show that minority voters were both cohesive and systematically defeated by white bloc voting in many districts.
The Court stressed the need for:

  • clear demographic evidence,
  • consistent voting patterns,
  • and demonstration of feasible alternative districts.

4. Only One District Violated the Constitution

The Court affirmed discrimination in Texas House District 90, where the legislature had intentionally moved Latino voters to achieve a specific racial composition.

But the Court rejected violations in every other challenged district.

5. Practical Effect: Courts Must Defer Unless Evidence Is Unusually Strong

Abbott v. Perez is widely viewed as one of the strongest modern statements of judicial deference to legislatures in redistricting—even when past discrimination has been found.

Justice Sotomayor’s dissent called the majority opinion “astonishing in its blindness.”


III. What These Cases Together Mean: Why the Court Upheld Texas’s Maps

Across both LULAC (2006) and Abbott (2018), a coherent theme emerges in the Supreme Court’s reasoning:

1. Partisan Gerrymandering Is Not the Court’s Job to Police

Unless partisan advantage clearly crosses into racial targeting, the Court will not strike it down.
Texas repeatedly argued political motives, and the Court repeatedly accepted them as legitimate.

2. Racial Discrimination Must Be Proven With Specific, District-Level Evidence

  • Plaintiffs must demonstrate that race—not politics—predominated.
  • Correlation between race and partisanship is not enough.
  • Evidence must address each district individually.

3. Legislatures Receive a Strong Presumption of Good Faith

Abbott v. Perez reaffirmed that courts should not infer intent from

  • prior discrimination,
  • suspicious timing,
  • or even foreseeable racial effects.

4. Section 2 Remedies Require Cohesive Minority Voting Blocs

LULAC (2006) found a violation only because evidence clearly showed cohesive Latino voters whose electoral progress was intentionally undermined.

5. Courts Avoid Intruding into “Political Questions”

The Court has repeatedly signaled reluctance to take over the political process.
This culminated in Rucho v. Common Cause (2019), where the Court held partisan gerrymandering claims categorically non-justiciable—a rule entirely consistent with how Texas cases were decided.


Conclusion: Why Texas Keeps Winning

Texas’s redistricting cases illustrate how the Supreme Court draws a sharp—and highly consequential—line:

  • Racial discrimination is unconstitutional, but must be proven with very specific evidence.
  • Partisan manipulation, even extreme manipulation, is permissible.
  • Courts defer heavily to state legislatures unless plaintiffs can clearly show that lawmakers used race as a tool, not merely politics.

In LULAC, challengers succeeded only where the evidence of racial vote dilution was unmistakable.
In Abbott v. Perez, they failed everywhere except one district because intent was not proven with the level of granularity the Court demanded.

The result is that Texas has repeatedly prevailed in redistricting litigation—not necessarily because its maps are racially neutral, but because the Court has set an unusually high bar for proving racial motive and has washed its hands of partisan claims altogether.

What Every Student Should Learn From Civics and Government — The Education of a Citizen

A collaboration between Lewis McLain & AI (4 of 4 in a Series)

If literature teaches us how to think,
and history teaches us where we came from,
and economics teaches us how choices shape the world,

then civics and government teach us how to live together in a free society.

When I was young, civics felt like a recitation of facts — three branches, the Constitution, the Bill of Rights. But I didn’t understand the deeper purpose or the tremendous responsibility that citizenship carries. I didn’t see that democracy is not self-sustaining. It requires informed people, disciplined judgment, and a shared understanding of how government actually works.

Years later, I came to realize that civics is not a list of facts to memorize — it is the operating manual for freedom.

This essay explores the essential civic knowledge students should learn, why it matters, and why it may be the single most endangered — and most important — subject today.


1. Understanding the Constitution — The Blueprint of American Government

Every student should know what the Constitution actually does.

At a minimum, students should understand:

  • Separation of powers
  • Checks and balances
  • Federalism (power divided between federal and state governments)
  • Individual rights
  • Limited government
  • Due process and equal protection

These aren’t abstract ideas. They’re the safeguards that prevent:

  • tyranny
  • abuse of power
  • unequal treatment
  • political retaliation
  • the erosion of liberty

Students should know why the Founders feared concentrated power. They should understand the debates between Hamilton and Jefferson, the compromises that made the system possible, and the principles that still hold it together.

A civically educated student knows what the government can do, what it cannot do, and what it should never be allowed to do.


2. How Laws Are Made — And Why It’s Supposed to Be Hard

A free people should know how laws move from idea to reality:

  • committee
  • debate
  • amendments
  • compromise
  • bicameral approval
  • executive signature
  • judicial review

Students should understand why the system has friction. The Founders designed lawmaking to be deliberate, slow, and thoughtful — not impulsive. This protects the nation from sudden swings of emotion, political fads, or the passions of the moment.

When students understand the process, they also understand:

  • why gridlock happens
  • why compromise is necessary
  • why no single branch can act alone
  • why courts exist as an independent check

This is how civics grounds expectations and tempers frustration.


3. Rights and Responsibilities — The Moral Core of Citizenship

Civics is not only about rights; it is also about responsibilities.

Students should understand:

  • free speech
  • free press
  • freedom of religion
  • right to vote
  • right to assemble
  • right to due process

But they should also learn:

  • the responsibility to vote
  • the responsibility to stay informed
  • the responsibility to obey just laws
  • the responsibility to serve on juries
  • the responsibility to hold leaders accountable
  • the responsibility to treat fellow citizens with dignity

A functioning democracy depends as much on personal virtue as it does on institutional design.


4. Local Government — The Level Students Understand the Least

Ironically, the level of government that affects daily life the most is the one students know the least about.

Students should understand:

  • cities, counties, school districts
  • zoning
  • local taxes
  • police and fire services
  • transportation systems
  • water and utility infrastructure
  • public debt and bond elections
  • local boards and commissions
  • how a city manager system works
  • how budgets are created and balanced

Local government is where the real work happens:

  • roads repaired
  • streets policed
  • water delivered
  • development approved
  • transit planned
  • emergency services coordinated
  • property taxes assessed

A civically educated adult understands where decisions are made — and how to influence them.


5. How Elections Work — Beyond the Headlines and Sound Bites

Every student should understand:

  • how voter registration works
  • how primaries differ from general elections
  • how the Electoral College works
  • how districts are drawn
  • what gerrymandering is
  • how campaign finance operates
  • the difference between federal, state, and local elections

They should learn how to evaluate:

  • candidates
  • platforms
  • ballot propositions
  • constitutional amendments
  • city bond proposals
  • school board decisions

Without civic education, elections become personality contests instead of informed deliberations.


6. The Balance Between Freedom and Order

Civics teaches students that government constantly manages tensions:

  • liberty vs. security
  • freedom vs. responsibility
  • majority rule vs. minority rights
  • government power vs. individual autonomy

These are not easy questions.
There are no perfect answers.
But a well-educated citizen understands the tradeoffs.

For example:

  • How far should free speech extend?
  • What powers should police have?
  • When should the state intervene in personal choices?
  • When does regulation protect people, and when does it stifle them?

Civics teaches students how to think through these issues, not what to believe.


7. Why Civics Matters Even More in the Age of AI

Artificial intelligence has changed the public square. It has amplified the need for civic understanding.

AI magnifies misinformation. A civically uneducated population is easy to manipulate.

AI can imitate authority. Only an informed citizen knows how to verify sources and test claims.

AI accelerates public emotion. Civic education slows people down — it teaches them to evaluate before reacting.

AI makes propaganda more sophisticated. Civics teaches how institutions work, which protects against deception.

Democracy cannot survive without an educated citizenry.

AI is powerful, but it is not responsible. Humans must be.

This is why civics — real civics — is urgently needed.


Conclusion: The Education of a Self-Governing People

History shows that democracies do not fall because enemies defeat them.
They fall because citizens forget how to govern themselves.

Civics teaches:

  • how power is structured
  • how laws are made
  • how rights are protected
  • how communities are built
  • how leaders should be chosen
  • how governments should behave
  • how citizens must participate

If literature strengthens the mind,
and history strengthens judgment,
and economics strengthens decision-making,

then civics strengthens the nation itself.

A free society is not sustained by wishes or by luck.
It is sustained by people who understand the system, value the responsibilities of citizenship, and guard the principles that keep liberty alive.

That is what civics is meant to teach —
and why it must remain at the heart of a complete education.