The Day the iPhone Rewired the World

A collaboration between Lewis McLain & AI

On January 9, 2007, at Macworld in San Francisco, Steve Jobs walked onto the stage and delivered one of the most consequential product announcements in modern history. He framed it theatrically—three devices in one: an iPod, a phone, and an internet communicator. Then he paused, smiled, and revealed the trick. They were not three devices. They were one. The Apple iPhone had arrived.

What followed was not merely a successful product launch. It was a hinge moment—one that quietly reordered how humans interact with technology, with information, with each other, and even with themselves.


What Made the iPhone Event Different

The iPhone announcement mattered not because it was the first smartphone, but because it redefined what a phone was supposed to be.

At the time, the market was dominated by devices with physical keyboards, styluses, nested menus, and clunky mobile browsers. BlackBerry owned business communication. Nokia owned scale. Microsoft owned enterprise software assumptions. Apple owned none of these markets.

Yet the iPhone introduced several radical departures:

  • Multi-touch as the interface
    Fingers replaced keyboards and styluses. Pinch, swipe, and tap turned abstract computing into something instinctive and physical.
  • A real web browser
    Not a stripped-down “mobile” version of the internet, but the actual web—zoomable, readable, usable.
  • Software-first design
    The device wasn’t defined by buttons or ports but by software, animations, and user experience. Hardware existed to serve software, not the other way around.
  • A unified ecosystem vision
    The iPhone was conceived not as a gadget but as a node—connected to iTunes, Macs, carriers, and eventually an App Store that did not yet exist but was already implied.

Jobs did not spend the keynote talking about specs. He talked about experience. That choice alone signaled a philosophical shift in consumer technology.


The Immediate Shockwave

The reaction was mixed. Some praised the elegance. Others mocked the lack of a physical keyboard, the high price, and the absence of third-party apps at launch. Industry leaders dismissed it as a niche luxury device.

Those critiques aged poorly.

Within a few years, nearly every phone manufacturer had abandoned keyboards. Touchscreens became universal. Mobile operating systems replaced desktop metaphors. The skeptics were not foolish—they were anchored to the past in a moment when the ground moved.


How the iPhone Changed Everyday Life

The iPhone did not just change phones. It collapsed entire categories of human activity into a pocket-sized slab of glass.

Communication shifted from voice-first to text, image, and video-first. Navigation moved from paper maps and memory to GPS-by-default. Photography became constant and social rather than occasional and deliberate. The internet ceased to be a place you “went” and became something you carried.

Several deeper changes followed:

  • Time became fragmented
    Micro-moments—checking, scrolling, responding—filled the spaces once occupied by waiting, boredom, or reflection.
  • Attention became a resource
    Notifications, feeds, and apps competed continuously for awareness, reshaping media, advertising, and even politics.
  • Work escaped the office
    Email, documents, approvals, and meetings followed people everywhere, blurring boundaries between professional and personal life.
  • Memory outsourced itself
    Phone numbers, directions, appointments, even photographs replaced recall with retrieval.

The iPhone did not force these changes, but it made them frictionless, and friction is often the last defense of human habits.


The App Store Effect

In July 2008, roughly a year after the iPhone went on sale, Apple launched the App Store, and the iPhone’s impact accelerated sharply. Developers gained a global distribution platform overnight. Entire industries emerged—ride-sharing, mobile banking, food delivery, social media influencers, mobile gaming—built on the assumption that everyone carried a powerful computer at all times.

This was not just technological leverage. It was economic leverage.

Apple positioned itself as the gatekeeper of a new digital economy, collecting a share of transactions while letting others shoulder innovation risk. Few business models in history have been so scalable with so little marginal cost.


The Financial Transformation of Apple

Before the iPhone, Apple was a successful but niche computer company. After the iPhone, it became something else entirely.

The iPhone evolved into Apple’s single largest revenue driver, often accounting for roughly half of annual revenue in its peak years. More importantly, it pulled customers into a broader ecosystem—Macs, iPads, Apple Watch, AirPods, services, subscriptions—each reinforcing the others.

Apple’s profits followed accordingly:

  • Revenue grew from tens of billions annually to hundreds of billions
  • Gross margins remained unusually high for a hardware company
  • Cash reserves swelled to levels rivaling national treasuries
  • Apple became, at times, the most valuable company in the world

The genius was not just the device. It was the integration—hardware, software, services, and brand operating as a single system. Competitors could copy features, but not the whole machine.


The Long View

January 9, 2007, now looks less like a product launch and more like a civilizational inflection point. The iPhone compressed computing into daily life so completely that it is now difficult to remember what came before.

That power has brought wonder and convenience—and distraction, dependency, and new ethical dilemmas. Tools that shape attention inevitably shape culture.

Apple did not merely sell a phone that day. It sold a future—one we are still living inside, still arguing about, and still trying to understand.

Artificial Intelligence in City Government: From Adoption to Accountability

A Practical Framework for Innovation, Oversight, and Public Trust

A collaboration between Lewis McLain & AI – a companion to the previous blog post on AI

Artificial intelligence has moved from novelty to necessity in public institutions. What began as experimental tools for drafting documents or summarizing data is now embedded in systems that influence budgeting, service delivery, enforcement prioritization, procurement screening, and public communication. Cities are discovering that AI is no longer optional—but neither is governance.

This essay unifies two truths that are often treated as competing ideas but must now be held together:

  1. AI adoption is inevitable and necessary if cities are to remain operationally effective and fiscally sustainable.
  2. AI oversight is now unavoidable wherever systems influence decisions affecting people, rights, or public trust.

These are not contradictions. They are sequential realities. Adoption without governance leads to chaos. Governance without adoption leads to irrelevance. The task for modern city leadership is to do both—intentionally.

I. The Adoption Imperative: AI as Municipal Infrastructure

Cities face structural pressures that are not temporary: constrained budgets, difficulty recruiting and retaining staff, growing service demands, and rising analytical complexity. AI tools offer a way to expand institutional capacity without expanding payrolls at the same rate.

Common municipal uses already include:

  • Drafting ordinances, reports, and correspondence
  • Summarizing public input and staff analysis
  • Forecasting revenues, expenditures, and service demand
  • Supporting customer service through chat or triage tools
  • Enhancing internal research and analytics

In this sense, AI is not a gadget. It is infrastructure, comparable to ERP systems, GIS, or financial modeling platforms. Cities that delay adoption will find themselves less capable, less competitive, and more expensive to operate.

Adoption, however, is not merely technical. AI reshapes workflows, compresses tasks, and changes how work is performed. Over time, this may alter staffing needs. The question is not whether AI will change city operations—it already is doing so. The question is whether those changes are guided or accidental.

II. The Oversight Imperative: Why Governance Is Now Required

As AI systems move beyond internal productivity and begin to influence decisions—directly or indirectly—oversight becomes essential.

AI systems are now used, or embedded through vendors, in areas such as:

  • Permit or inspection prioritization
  • Eligibility screening for programs or services
  • Vendor risk scoring and procurement screening
  • Enforcement triage
  • Public safety analytics

When AI recommendations shape outcomes, even if a human signs off, accountability cannot be vague. Errors at scale, opaque logic, and undocumented assumptions create legal exposure and erode public trust faster than traditional human error.

Oversight is required because:

  • Scale magnifies mistakes: a single flaw can affect thousands before detection.
  • Opacity undermines legitimacy: residents are less forgiving of decisions they cannot understand.
  • Legal scrutiny is increasing: courts and legislatures are paying closer attention to algorithmic decision-making.

Oversight is not about banning AI. It is about ensuring AI is used responsibly, transparently, and under human control.

III. Bridging Adoption and Oversight: A Two-Speed Framework

The tension between “move fast” and “govern carefully” dissolves once AI uses are separated by risk.

Low-Risk, Internal AI Uses

Examples include drafting, summarization, forecasting, research, and internal analytics.

Approach:
Adopt quickly, document lightly, train staff, and monitor outcomes.

Decision-Adjacent or High-Risk AI Uses

Examples include enforcement prioritization, eligibility determinations, public safety analytics, and procurement screening affecting vendors.

Approach:
Require review, documentation, transparency, and meaningful human oversight before deployment.

This two-speed framework allows cities to capture productivity benefits immediately while placing guardrails only where risk to rights, equity, or trust is real.
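To make the triage concrete, here is a minimal sketch of how a city IT office might encode the two-speed rule. The use-case names, tier assignments, and control lists are hypothetical illustrations, not requirements drawn from any statute; note that unknown use cases default to the high-risk path, mirroring the conservative posture the framework recommends.

```python
from enum import Enum

class RiskTier(Enum):
    LOW = "low"    # internal productivity: adopt quickly, document lightly
    HIGH = "high"  # decision-adjacent: review before deployment

# Hypothetical registry mapping municipal AI use cases to risk tiers.
USE_CASE_TIERS = {
    "document_drafting": RiskTier.LOW,
    "revenue_forecasting": RiskTier.LOW,
    "enforcement_prioritization": RiskTier.HIGH,
    "eligibility_screening": RiskTier.HIGH,
    "procurement_screening": RiskTier.HIGH,
}

def required_controls(use_case: str) -> list[str]:
    """Return the governance steps a use case must clear before deployment."""
    # Unrecognized use cases are treated as high-risk until reviewed.
    tier = USE_CASE_TIERS.get(use_case, RiskTier.HIGH)
    if tier is RiskTier.LOW:
        return ["register in AI inventory", "train staff", "monitor outcomes"]
    return [
        "register in AI inventory",
        "written justification and legal review",
        "documented human oversight",
        "annual audit",
    ]

if __name__ == "__main__":
    print(required_controls("document_drafting"))      # light-touch path
    print(required_controls("eligibility_screening"))  # guarded path
```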

IV. Texas Context: Statewide Direction on AI Governance

The Texas Legislature reinforced this balanced approach through the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026. The law does not prohibit AI use. Instead, it establishes expectations for transparency, accountability, and prohibited practices—particularly for government entities.

Key elements include:

  • Disclosure when residents interact with AI systems
  • Prohibitions on social scoring by government
  • Restrictions on discriminatory AI use
  • Guardrails around biometric and surveillance applications
  • Civil penalties for unlawful or deceptive deployment
  • Creation of a statewide Artificial Intelligence Council

The message is clear: Texas expects governments to adopt AI responsibly—neither recklessly nor fearfully.

V. Implications for Cities and Transit Agencies

Cities are already using AI, often unknowingly, through vendor-provided software. Transit agencies face elevated exposure because they combine finance, enforcement, surveillance, and public safety.

The greatest risk is not AI itself, but uncontrolled AI:

  • Vendor-embedded algorithms without disclosure
  • No documented human accountability
  • No audit trail
  • No process for suspension or correction

Cities that act early reduce legal risk, preserve public trust, and maintain operational flexibility.
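For cities asking what a minimal audit trail might look like in practice, the sketch below records each AI-assisted recommendation alongside the accountable human reviewer and the final action taken. The schema, field names, and file format are illustrative assumptions, not requirements from the Act or any vendor product.

```python
import json
from dataclasses import dataclass, asdict
from datetime import datetime, timezone

@dataclass
class AIDecisionRecord:
    """One auditable entry per AI-assisted recommendation (illustrative schema)."""
    system_name: str      # which AI system produced the recommendation
    use_case: str         # e.g., "procurement_screening"
    recommendation: str   # what the system suggested
    human_reviewer: str   # the accountable staff member who signed off
    final_action: str     # what the city actually did
    timestamp: str        # when the decision was recorded (UTC, ISO 8601)

def log_decision(record: AIDecisionRecord, path: str = "ai_audit_log.jsonl") -> None:
    """Append the record to a JSON Lines file so every entry is retained."""
    with open(path, "a", encoding="utf-8") as f:
        f.write(json.dumps(asdict(record)) + "\n")

log_decision(AIDecisionRecord(
    system_name="VendorRiskScorer",          # hypothetical vendor tool
    use_case="procurement_screening",
    recommendation="flag vendor for manual review",
    human_reviewer="j.doe@cityexample.gov",  # hypothetical reviewer
    final_action="manual review completed; vendor approved",
    timestamp=datetime.now(timezone.utc).isoformat(),
))
```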

VI. Workforce Implications: Accurate and Defensible Language

AI will change how work is done over time. It would be inaccurate and irresponsible to claim otherwise.

At the same time, AI does not mandate immediate workforce reductions. In public institutions, workforce impacts—if they occur—are most likely to happen gradually through:

  • Attrition
  • Reassignment
  • Retraining
  • Role redesign

Final staffing decisions remain with City leadership and City Council. AI is a tool for improving capacity and sustainability, not an automatic trigger for reductions.

Conclusion: Coherent, Accountable AI

AI adoption without governance invites chaos. Governance without adoption invites stagnation. Cities that succeed will do both—moving quickly where risk is low and governing carefully where risk is high.

This is not about technology hype. It is about institutional competence in a digital age.


Appendix 1 — Texas Responsible Artificial Intelligence Governance Act (HB 149)

                                                   H.B. No. 149

AN ACT

relating to regulation of the use of artificial intelligence systems in this state; providing civil penalties.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1.  This Act may be cited as the Texas Responsible Artificial Intelligence Governance Act.

SECTION 2.  Section 503.001, Business & Commerce Code, is amended by amending Subsections (a) and (e) and adding Subsections (b-1) and (f) to read as follows:

(a)  In this section:

(1)  “Artificial intelligence system” has the meaning assigned by Section 551.001.

(2)  “Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.

(b-1)  For purposes of Subsection (b), an individual has not been informed of and has not provided consent for the capture or storage of a biometric identifier of an individual for a commercial purpose based solely on the existence of an image or other media containing one or more biometric identifiers of the individual on the Internet or other publicly available source unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate.

(e)  This section does not apply to:

(1)  voiceprint data retained by a financial institution or an affiliate of a financial institution, as those terms are defined by 15 U.S.C. Section 6809;

(2)  the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual; or

(3)  the development or deployment of an artificial intelligence model or system for the purposes of:

(A)  preventing, detecting, protecting against, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any other illegal activity;

(B)  preserving the integrity or security of a system; or

(C)  investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, a malicious or deceptive activity, or any other illegal activity.

(f)  If a biometric identifier captured for the purpose of training an artificial intelligence system is subsequently used for a commercial purpose not described by Subsection (e), the person possessing the biometric identifier is subject to:

(1)  this section’s provisions for the possession and destruction of a biometric identifier; and

(2)  the penalties associated with a violation of this section.

SECTION 3.  Section 541.104(a), Business & Commerce Code, is amended to read as follows:

(a)  A processor shall adhere to the instructions of a controller and shall assist the controller in meeting or complying with the controller’s duties or requirements under this chapter, including:

(1)  assisting the controller in responding to consumer rights requests submitted under Section 541.051 by using appropriate technical and organizational measures, as reasonably practicable, taking into account the nature of processing and the information available to the processor;

(2)  assisting the controller with regard to complying with requirements relating to the security of processing personal data, and if applicable, the personal data collected, stored, and processed by an artificial intelligence system, as that term is defined by Section 551.001, and to the notification of a breach of security of the processor’s system under Chapter 521, taking into account the nature of processing and the information available to the processor; and

(3)  providing necessary information to enable the controller to conduct and document data protection assessments under Section 541.105.

SECTION 4.  Title 11, Business & Commerce Code, is amended by adding Subtitle D to read as follows:

SUBTITLE D.  ARTIFICIAL INTELLIGENCE PROTECTION

CHAPTER 551.  GENERAL PROVISIONS

Sec. 551.001.  DEFINITIONS.  In this subtitle:

(1)  “Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

(2)  “Consumer” means an individual who is a resident of this state acting only in an individual or household context.  The term does not include an individual acting in a commercial or employment context.

(3)  “Council” means the Texas Artificial Intelligence Council established under Chapter 554.

Sec. 551.002.  APPLICABILITY OF SUBTITLE.  This subtitle applies only to a person who:

(1)  promotes, advertises, or conducts business in this state;

(2)  produces a product or service used by residents of this state; or

(3)  develops or deploys an artificial intelligence system in this state.

Sec. 551.003.  CONSTRUCTION AND APPLICATION OF SUBTITLE.  This subtitle shall be broadly construed and applied to promote its underlying purposes, which are to:

(1)  facilitate and advance the responsible development and use of artificial intelligence systems;

(2)  protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems;

(3)  provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and

(4)  provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.

CHAPTER 552.  ARTIFICIAL INTELLIGENCE PROTECTION

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 552.001.  DEFINITIONS.  In this chapter:

(1)  “Deployer” means a person who deploys an artificial intelligence system for use in this state.

(2)  “Developer” means a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state.

(3)  “Governmental entity” means any department, commission, board, office, authority, or other administrative unit of this state or of any political subdivision of this state, that exercises governmental functions under the authority of the laws of this state.  The term does not include:

(A)  a hospital district created under the Health and Safety Code or Article IX, Texas Constitution; or

(B)  an institution of higher education, as defined by Section 61.003, Education Code, including any university system or any component institution of the system.

Sec. 552.002.  CONSTRUCTION OF CHAPTER.  This chapter may not be construed to:

(1)  impose a requirement on a person that adversely affects the rights or freedoms of any person, including the right of free speech; or

(2)  authorize any department or agency other than the Department of Insurance to regulate or oversee the business of insurance.

Sec. 552.003.  LOCAL PREEMPTION.  This chapter supersedes and preempts any ordinance, resolution, rule, or other regulation adopted by a political subdivision regarding the use of artificial intelligence systems.

SUBCHAPTER B. DUTIES AND PROHIBITIONS ON USE OF ARTIFICIAL INTELLIGENCE

Sec. 552.051.  DISCLOSURE TO CONSUMERS.  (a)  In this section, “health care services” means services related to human health or to the diagnosis, prevention, or treatment of a human disease or impairment provided by an individual licensed, registered, or certified under applicable state or federal law to provide those services.

(b)  A governmental agency that makes available an artificial intelligence system intended to interact with consumers shall disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system.

(c)  A person is required to make the disclosure under Subsection (b) regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.

(d)  A disclosure under Subsection (b):

(1)  must be clear and conspicuous;

(2)  must be written in plain language; and

(3)  may not use a dark pattern, as that term is defined by Section 541.001.

(e)  A disclosure under Subsection (b) may be provided by using a hyperlink to direct a consumer to a separate Internet web page.

(f)  If an artificial intelligence system is used in relation to health care service or treatment, the provider of the service or treatment shall provide the disclosure under Subsection (b) to the recipient of the service or treatment or the recipient’s personal representative not later than the date the service or treatment is first provided, except in the case of emergency, in which case the provider shall provide the required disclosure as soon as reasonably possible.

Sec. 552.052.  MANIPULATION OF HUMAN BEHAVIOR.  A person may not develop or deploy an artificial intelligence system in a manner that intentionally aims to incite or encourage a person to:

(1)  commit physical self-harm, including suicide;

(2)  harm another person; or

(3)  engage in criminal activity.

Sec. 552.053.  SOCIAL SCORING.  A governmental entity may not use or deploy an artificial intelligence system that evaluates or classifies a natural person or group of natural persons based on social behavior or personal characteristics, whether known, inferred, or predicted, with the intent to calculate or assign a social score or similar categorical estimation or valuation of the person or group of persons that results or may result in:

(1)  detrimental or unfavorable treatment of a person or group of persons in a social context unrelated to the context in which the behavior or characteristics were observed or noted;

(2)  detrimental or unfavorable treatment of a person or group of persons that is unjustified or disproportionate to the nature or gravity of the observed or noted behavior or characteristics; or

(3)  the infringement of any right guaranteed under the United States Constitution, the Texas Constitution, or state or federal law.

Sec. 552.054.  CAPTURE OF BIOMETRIC DATA.  (a)  In this section, “biometric data” means data generated by automatic measurements of an individual’s biological characteristics.  The term includes a fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual.  The term does not include a physical or digital photograph or data generated from a physical or digital photograph, a video or audio recording or data generated from a video or audio recording, or information collected, used, or stored for health care treatment, payment, or operations under the Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. Section 1320d et seq.).

(b)  A governmental entity may not develop or deploy an artificial intelligence system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source without the individual’s consent, if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or state or federal law.

(c)  A violation of Section 503.001 is a violation of this section.

Sec. 552.055.  CONSTITUTIONAL PROTECTION.  (a)  A person may not develop or deploy an artificial intelligence system with the sole intent for the artificial intelligence system to infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution.

(b)  This section is remedial in purpose and may not be construed to create or expand any right guaranteed by the United States Constitution.

Sec. 552.056.  UNLAWFUL DISCRIMINATION.  (a)  In this section:

(1)  “Financial institution” has the meaning assigned by Section 201.101, Finance Code.

(2)  “Insurance entity” means:

(A)  an entity described by Section 82.002(a), Insurance Code;

(B)  a fraternal benefit society regulated under Chapter 885, Insurance Code; or

(C)  the developer of an artificial intelligence system used by an entity described by Paragraph (A) or (B).

(3)  “Protected class” means a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.

(b)  A person may not develop or deploy an artificial intelligence system with the intent to unlawfully discriminate against a protected class in violation of state or federal law.

(c)  For purposes of this section, a disparate impact is not sufficient by itself to demonstrate an intent to discriminate.

(d)  This section does not apply to an insurance entity for purposes of providing insurance services if the entity is subject to applicable statutes regulating unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the business of insurance.

(e)  A federally insured financial institution is considered to be in compliance with this section if the institution complies with all federal and state banking laws and regulations.

Sec. 552.057.  CERTAIN SEXUALLY EXPLICIT CONTENT AND CHILD PORNOGRAPHY.  A person may not:

(1)  develop or distribute an artificial intelligence system with the sole intent of producing, assisting or aiding in producing, or distributing:

(A)  visual material in violation of Section 43.26, Penal Code; or

(B)  deep fake videos or images in violation of Section 21.165, Penal Code; or

(2)  intentionally develop or distribute an artificial intelligence system that engages in text-based conversations that simulate or describe sexual conduct, as that term is defined by Section 43.25, Penal Code, while impersonating or imitating a child younger than 18 years of age.

SUBCHAPTER C.  ENFORCEMENT

Sec. 552.101.  ENFORCEMENT AUTHORITY.  (a)  The attorney general has exclusive authority to enforce this chapter, except to the extent provided by Section 552.106.

(b)  This chapter does not provide a basis for, and is not subject to, a private right of action for a violation of this chapter or any other law.

Sec. 552.102.  INFORMATION AND COMPLAINTS.  The attorney general shall create and maintain an online mechanism on the attorney general’s Internet website through which a consumer may submit a complaint under this chapter to the attorney general.

Sec. 552.103.  INVESTIGATIVE AUTHORITY.  (a)  If the attorney general receives a complaint through the online mechanism under Section 552.102 alleging a violation of this chapter, the attorney general may issue a civil investigative demand to determine if a violation has occurred.  The attorney general shall issue demands in accordance with and under the procedures established under Section 15.10.

(b)  The attorney general may request from the person reported through the online mechanism, pursuant to a civil investigative demand issued under Subsection (a):

(1)  a high-level description of the purpose, intended use, deployment context, and associated benefits of the artificial intelligence system with which the person is affiliated;

(2)  a description of the type of data used to program or train the artificial intelligence system;

(3)  a high-level description of the categories of data processed as inputs for the artificial intelligence system;

(4)  a high-level description of the outputs produced by the artificial intelligence system;

(5)  any metrics the person uses to evaluate the performance of the artificial intelligence system;

(6)  any known limitations of the artificial intelligence system;

(7)  a high-level description of the post-deployment monitoring and user safeguards the person uses for the artificial intelligence system, including, if the person is a deployer, the oversight, use, and learning process established by the person to address issues arising from the system’s deployment; or

(8)  any other relevant documentation reasonably necessary for the attorney general to conduct an investigation under this section.

Sec. 552.104.  NOTICE OF VIOLATION; OPPORTUNITY TO CURE.  (a)  If the attorney general determines that a person has violated or is violating this chapter, the attorney general shall notify the person in writing of the determination, identifying the specific provisions of this chapter the attorney general alleges have been or are being violated.

(b)  The attorney general may not bring an action against the person:

(1)  before the 60th day after the date the attorney general provides the notice under Subsection (a); or

(2)  if, before the 60th day after the date the attorney general provides the notice under Subsection (a), the person:

(A)  cures the identified violation; and

(B)  provides the attorney general with a written statement that the person has:

(i)  cured the alleged violation;

(ii)  provided supporting documentation to show the manner in which the person cured the violation; and

(iii)  made any necessary changes to internal policies to reasonably prevent further violation of this chapter.

Sec. 552.105.  CIVIL PENALTY; INJUNCTION.  (a)  A person who violates this chapter and does not cure the violation under Section 552.104 is liable to this state for a civil penalty in an amount of:

(1)  for each violation the court determines to be curable or a breach of a statement submitted to the attorney general under Section 552.104(b)(2), not less than $10,000 and not more than $12,000;

(2)  for each violation the court determines to be uncurable, not less than $80,000 and not more than $200,000; and

(3)  for a continued violation, not less than $2,000 and not more than $40,000 for each day the violation continues.

(b)  The attorney general may bring an action in the name of this state to:

(1)  collect a civil penalty under this section;

(2)  seek injunctive relief against further violation of this chapter; and

(3)  recover attorney’s fees and reasonable court costs or other investigative expenses.

(c)  There is a rebuttable presumption that a person used reasonable care as required under this chapter.

(d)  A defendant in an action under this section may seek an expedited hearing or other process, including a request for declaratory judgment, if the person believes in good faith that the person has not violated this chapter.

(e)  A defendant in an action under this section may not be found liable if:

(1)  another person uses the artificial intelligence system affiliated with the defendant in a manner prohibited by this chapter; or

(2)  the defendant discovers a violation of this chapter through:

(A)  feedback from a developer, deployer, or other person who believes a violation has occurred;

(B)  testing, including adversarial testing or red-team testing;

(C)  following guidelines set by applicable state agencies; or

(D)  if the defendant substantially complies with the most recent version of the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” published by the National Institute of Standards and Technology or another nationally or internationally recognized risk management framework for artificial intelligence systems, an internal review process.

(f)  The attorney general may not bring an action to collect a civil penalty under this section against a person for an artificial intelligence system that has not been deployed.

Sec. 552.106.  ENFORCEMENT ACTIONS BY STATE AGENCIES.  (a)  A state agency may impose sanctions against a person licensed, registered, or certified by that agency for a violation of Subchapter B if:

(1)  the person has been found in violation of this chapter under Section 552.105; and

(2)  the attorney general has recommended additional enforcement by the applicable agency.

(b)  Sanctions under this section may include:

(1)  suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and

(2)  a monetary penalty not to exceed $100,000.

CHAPTER 553.  ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 553.001.  DEFINITIONS.  In this chapter:

(1)  “Applicable agency” means a department of this state established by law to regulate certain types of business activity in this state and the people engaging in that business, including the issuance of licenses and registrations, that the department determines would regulate a program participant if the person were not operating under this chapter.

(2)  “Department” means the Texas Department of Information Resources.

(3)  “Program” means the regulatory sandbox program established under this chapter that allows a person, without being licensed or registered under the laws of this state, to test an artificial intelligence system for a limited time and on a limited basis.

(4)  “Program participant” means a person whose application to participate in the program is approved and who may test an artificial intelligence system under this chapter.

SUBCHAPTER B.  SANDBOX PROGRAM FRAMEWORK

Sec. 553.051.  ESTABLISHMENT OF SANDBOX PROGRAM.  (a)  The department, in consultation with the council, shall create a regulatory sandbox program that enables a person to obtain legal protection and limited access to the market in this state to test innovative artificial intelligence systems without obtaining a license, registration, or other regulatory authorization.

(b)  The program is designed to:

(1)  promote the safe and innovative use of artificial intelligence systems across various sectors including healthcare, finance, education, and public services;

(2)  encourage responsible deployment of artificial intelligence systems while balancing the need for consumer protection, privacy, and public safety;

(3)  provide clear guidelines for a person who develops an artificial intelligence system to test systems while certain laws and regulations related to the testing are waived or suspended; and

(4)  allow a person to engage in research, training, testing, or other pre-deployment activities to develop an artificial intelligence system.

(c)  The attorney general may not file or pursue charges against a program participant for violation of a law or regulation waived under this chapter that occurs during the testing period.

(d)  A state agency may not file or pursue punitive action against a program participant, including the imposition of a fine or the suspension or revocation of a license, registration, or other authorization, for violation of a law or regulation waived under this chapter that occurs during the testing period.

(e)  Notwithstanding Subsections (c) and (d), the requirements of Subchapter B, Chapter 552, may not be waived, and the attorney general or a state agency may file or pursue charges or action against a program participant who violates that subchapter.

Sec. 553.052.  APPLICATION FOR PROGRAM PARTICIPATION.  (a)  A person must obtain approval from the department and any applicable agency before testing an artificial intelligence system under the program.

(b)  The department by rule shall prescribe the application form.  The form must require the applicant to:

(1)  provide a detailed description of the artificial intelligence system the applicant desires to test in the program, and its intended use;

(2)  include a benefit assessment that addresses potential impacts on consumers, privacy, and public safety;

(3)  describe the applicant’s plan for mitigating any adverse consequences that may occur during the test; and

(4)  provide proof of compliance with any applicable federal artificial intelligence laws and regulations.

Sec. 553.053.  DURATION AND SCOPE OF PARTICIPATION.  (a)  A program participant approved by the department and each applicable agency may test and deploy an artificial intelligence system under the program for a period of not more than 36 months.

(b)  The department may extend a test under this chapter if the department finds good cause for the test to continue.

Sec. 553.054.  EFFICIENT USE OF RESOURCES.  The department shall coordinate the activities under this subchapter and any other law relating to artificial intelligence systems to ensure efficient system implementation and to streamline the use of department resources, including information sharing and personnel.

SUBCHAPTER C.  OVERSIGHT AND COMPLIANCE

Sec. 553.101.  COORDINATION WITH APPLICABLE AGENCY.  (a)  The department shall coordinate with all applicable agencies to oversee the operation of a program participant.

(b)  The council or an applicable agency may recommend to the department that a program participant be removed from the program if the council or applicable agency finds that the program participant’s artificial intelligence system:

(1)  poses an undue risk to public safety or welfare;

(2)  violates any federal law or regulation; or

(3)  violates any state law or regulation not waived under the program.

Sec. 553.102.  PERIODIC REPORT BY PROGRAM PARTICIPANT.  (a)  A program participant shall provide a quarterly report to the department.

(b)  The report shall include:

(1)  metrics for the artificial intelligence system’s performance;

(2)  updates on how the artificial intelligence system mitigates any risks associated with its operation; and

(3)  feedback from consumers and affected stakeholders that are using an artificial intelligence system tested under this chapter.

(c)  The department shall maintain confidentiality regarding the intellectual property, trade secrets, and other sensitive information it obtains through the program.

Sec. 553.103.  ANNUAL REPORT BY DEPARTMENT.  (a)  The department shall submit an annual report to the legislature.

(b)  The report shall include:

(1)  the number of program participants testing an artificial intelligence system in the program;

(2)  the overall performance and impact of artificial intelligence systems tested in the program; and

(3)  recommendations on changes to laws or regulations for future legislative consideration.

CHAPTER 554.  TEXAS ARTIFICIAL INTELLIGENCE COUNCIL

SUBCHAPTER A.  CREATION AND ORGANIZATION OF COUNCIL

Sec. 554.001.  CREATION OF COUNCIL.  (a)  The Texas Artificial Intelligence Council is created to:

(1)  ensure artificial intelligence systems in this state are ethical and developed in the public’s best interest;

(2)  ensure artificial intelligence systems in this state do not harm public safety or undermine individual freedoms by finding issues and making recommendations to the legislature regarding the Penal Code and Chapter 82, Civil Practice and Remedies Code;

(3)  identify existing laws and regulations that impede innovation in the development of artificial intelligence systems and recommend appropriate reforms;

(4)  analyze opportunities to improve the efficiency and effectiveness of state government operations through the use of artificial intelligence systems;

(5)  make recommendations to applicable state agencies regarding the use of artificial intelligence systems to improve the agencies’ efficiency and effectiveness;

(6)  evaluate potential instances of regulatory capture, including undue influence by technology companies or disproportionate burdens on smaller innovators caused by the use of artificial intelligence systems;

(7)  evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users through the use of artificial intelligence systems;

(8)  offer guidance and recommendations to the legislature on the ethical and legal use of artificial intelligence systems;

(9)  conduct and publish the results of a study on the current regulatory environment for artificial intelligence systems;

(10)  receive reports from the Department of Information Resources regarding the regulatory sandbox program under Chapter 553; and

(11)  make recommendations for improvements to the regulatory sandbox program under Chapter 553.

(b)  The council is administratively attached to the Department of Information Resources, and the department shall provide administrative support to the council as provided by this section.

(c)  The Department of Information Resources and the council shall enter into a memorandum of understanding detailing:

(1)  the administrative support the council requires from the department to fulfill the council’s purposes;

(2)  the reimbursement of administrative expenses to the department; and

(3)  any other provisions necessary to ensure the efficient operation of the council.

Sec. 554.002.  COUNCIL MEMBERSHIP.  (a)  The council is composed of seven members as follows:

(1)  three members of the public appointed by the governor;

(2)  two members of the public appointed by the lieutenant governor; and

(3)  two members of the public appointed by the speaker of the house of representatives.

(b)  Members of the council serve staggered four-year terms, with the terms of three or four members expiring every two years.

(c)  The governor shall appoint a chair from among the members, and the council shall elect a vice chair from its membership.

(d)  The council may establish an advisory board composed of individuals from the public who possess expertise directly related to the council’s functions, including technical, ethical, regulatory, and other relevant areas.

Sec. 554.003.  QUALIFICATIONS.  Members of the council must be Texas residents and have knowledge or expertise in one or more of the following areas:

(1)  artificial intelligence systems;

(2)  data privacy and security;

(3)  ethics in technology or law;

(4)  public policy and regulation;

(5)  risk management related to artificial intelligence systems;

(6)  improving the efficiency and effectiveness of governmental operations; or

(7)  anticompetitive practices and market fairness.

Sec. 554.004.  STAFF AND ADMINISTRATION.  The council may hire an executive director and other personnel as necessary to perform its duties.

SUBCHAPTER B.  POWERS AND DUTIES OF COUNCIL

Sec. 554.101.  ISSUANCE OF REPORTS.  (a)  The council may issue reports to the legislature regarding the use of artificial intelligence systems in this state.

(b)  The council may issue reports on:

(1)  the compliance of artificial intelligence systems in this state with the laws of this state;

(2)  the ethical implications of deploying artificial intelligence systems in this state;

(3)  data privacy and security concerns related to artificial intelligence systems in this state; or

(4)  potential liability or legal risks associated with the use of artificial intelligence systems in this state.

Sec. 554.102.  TRAINING AND EDUCATIONAL OUTREACH.  The council shall conduct training programs for state agencies and local governments on the use of artificial intelligence systems.

Sec. 554.103.  LIMITATION OF AUTHORITY.  The council may not:

(1)  adopt rules or promulgate guidance that is binding for any entity;

(2)  interfere with or override the operation of a state agency; or

(3)  perform a duty or exercise a power not granted by this chapter.

SECTION 5.  Section 325.011, Government Code, is amended to read as follows:

Sec. 325.011.  CRITERIA FOR REVIEW.  The commission and its staff shall consider the following criteria in determining whether a public need exists for the continuation of a state agency or its advisory committees or for the performance of the functions of the agency or its advisory committees:

(1)  the efficiency and effectiveness with which the agency or the advisory committee operates;

(2)(A)  an identification of the mission, goals, and objectives intended for the agency or advisory committee and of the problem or need that the agency or advisory committee was intended to address; and

(B)  the extent to which the mission, goals, and objectives have been achieved and the problem or need has been addressed;

(3)(A)  an identification of any activities of the agency in addition to those granted by statute and of the authority for those activities; and

(B)  the extent to which those activities are needed;

(4)  an assessment of authority of the agency relating to fees, inspections, enforcement, and penalties;

(5)  whether less restrictive or alternative methods of performing any function that the agency performs could adequately protect or provide service to the public;

(6)  the extent to which the jurisdiction of the agency and the programs administered by the agency overlap or duplicate those of other agencies, the extent to which the agency coordinates with those agencies, and the extent to which the programs administered by the agency can be consolidated with the programs of other state agencies;

(7)  the promptness and effectiveness with which the agency addresses complaints concerning entities or other persons affected by the agency, including an assessment of the agency’s administrative hearings process;

(8)  an assessment of the agency’s rulemaking process and the extent to which the agency has encouraged participation by the public in making its rules and decisions and the extent to which the public participation has resulted in rules that benefit the public;

(9)  the extent to which the agency has complied with:

(A)  federal and state laws and applicable rules regarding equality of employment opportunity and the rights and privacy of individuals; and

(B)  state law and applicable rules of any state agency regarding purchasing guidelines and programs for historically underutilized businesses;

(10)  the extent to which the agency issues and enforces rules relating to potential conflicts of interest of its employees;

(11)  the extent to which the agency complies with Chapters 551 and 552 and follows records management practices that enable the agency to respond efficiently to requests for public information;

(12)  the effect of federal intervention or loss of federal funds if the agency is abolished;

(13)  the extent to which the purpose and effectiveness of reporting requirements imposed on the agency justifies the continuation of the requirement; [and]

(14)  an assessment of the agency’s cybersecurity practices using confidential information available from the Department of Information Resources or any other appropriate state agency; and

(15)  an assessment of the agency’s use of artificial intelligence systems, as that term is defined by Section 551.001, Business & Commerce Code, in its operations and its oversight of the use of artificial intelligence systems by persons under the agency’s jurisdiction, and any related impact on the agency’s ability to achieve its mission, goals, and objectives, made using information available from the Department of Information Resources, the attorney general, or any other appropriate state agency.

SECTION 6.  Section 2054.068(b), Government Code, is amended to read as follows:

(b)  The department shall collect from each state agency information on the status and condition of the agency’s information technology infrastructure, including information regarding:

(1)  the agency’s information security program;

(2)  an inventory of the agency’s servers, mainframes, cloud services, and other information technology equipment;

(3)  identification of vendors that operate and manage the agency’s information technology infrastructure; [and]

(4)  any additional related information requested by the department; and

(5)  an evaluation of the use or considered use of artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, by each state agency.

SECTION 7.  Section 2054.0965(b), Government Code, is amended to read as follows:

(b)  Except as otherwise modified by rules adopted by the department, the review must include:

(1)  an inventory of the agency’s major information systems, as defined by Section 2054.008, and other operational or logistical components related to deployment of information resources as prescribed by the department;

(2)  an inventory of the agency’s major databases, artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, and applications;

(3)  a description of the agency’s existing and planned telecommunications network configuration;

(4)  an analysis of how information systems, components, databases, applications, and other information resources have been deployed by the agency in support of:

(A)  applicable achievement goals established under Section 2056.006 and the state strategic plan adopted under Section 2056.009;

(B)  the state strategic plan for information resources; and

(C)  the agency’s business objectives, mission, and goals;

(5)  agency information necessary to support the state goals for interoperability and reuse; and

(6)  confirmation by the agency of compliance with state statutes, rules, and standards relating to information resources.

SECTION 8.  Not later than September 1, 2026, the attorney general shall post on the attorney general’s Internet website the information and online mechanism required by Section 552.102, Business & Commerce Code, as added by this Act.

SECTION 9.  (a)  Notwithstanding any other section of this Act, in a state fiscal year, a state agency to which this Act applies is not required to implement a provision found in another section of this Act that is drafted as a mandatory provision imposing a duty on the agency to take an action unless money is specifically appropriated to the agency for that fiscal year to carry out that duty.  The agency may implement the provision in that fiscal year to the extent other funding is available to the agency to do so.

(b)  If, as authorized by Subsection (a) of this section, the state agency does not implement the mandatory provision in a state fiscal year, the state agency, in its legislative budget request for the next state fiscal biennium, shall certify that fact to the Legislative Budget Board and include a written estimate of the costs of implementing the provision in each year of that next state fiscal biennium.

SECTION 10.  This Act takes effect January 1, 2026.

    President of the Senate           Speaker of the House      

I certify that H.B. No. 149 was passed by the House on April 23, 2025, by the following vote:  Yeas 146, Nays 3, 1 present, not voting; and that the House concurred in Senate amendments to H.B. No. 149 on May 30, 2025, by the following vote:  Yeas 121, Nays 17, 2 present, not voting.

______________________________

Chief Clerk of the House   

I certify that H.B. No. 149 was passed by the Senate, with amendments, on May 23, 2025, by the following vote:  Yeas 31, Nays 0.

______________________________

Secretary of the Senate   

APPROVED: __________________

                 Date       

          __________________

               Governor       


Appendix 2 — Model Ordinance: Responsible Use of Artificial Intelligence in City Operations

ORDINANCE NO. ______

AN ORDINANCE

relating to the responsible use of artificial intelligence systems by the City; establishing transparency, accountability, and oversight requirements; and providing for implementation and administration.

WHEREAS, the City recognizes that artificial intelligence (“AI”) systems are increasingly used to improve operational efficiency, service delivery, data analysis, and internal workflows; and

WHEREAS, the City further recognizes that certain uses of AI may influence decisions affecting residents, employees, vendors, or regulated parties and therefore require appropriate oversight; and

WHEREAS, the City seeks to encourage responsible innovation while preserving public trust, transparency, and accountability; and

WHEREAS, the Texas Legislature has enacted the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026, establishing statewide standards for AI use by government entities; and

WHEREAS, the City recognizes that the adoption of artificial intelligence tools may, over time, change how work is performed and how staffing needs are structured, and that any such impacts are expected to occur gradually through attrition, reassignment, or role redesign rather than immediate workforce reductions;

NOW, THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF __________, TEXAS:

Section 1. Definitions

For purposes of this Ordinance:

  1. “Artificial Intelligence System” means a computational system that uses machine learning, statistical modeling, or related techniques to perform tasks normally associated with human intelligence, including analysis, prediction, classification, content generation, or prioritization.
  2. “Decision-Adjacent AI” means an AI system that materially influences, prioritizes, or recommends outcomes related to enforcement, eligibility, allocation of resources, personnel actions, procurement decisions, or public services, even if final decisions are made by a human.
  3. “High-Risk AI Use” means deployment of an AI system that directly or indirectly affects individual rights, access to services, enforcement actions, or legally protected interests.
  4. “Department” means any City department, office, division, or agency.

Section 2. Permitted Use of Artificial Intelligence

(a) Internal Productivity Uses. Departments may deploy AI systems for internal productivity and analytical purposes, including but not limited to:

  • Drafting and summarization of documents
  • Data analysis and forecasting
  • Workflow automation
  • Research and internal reporting
  • Customer-service chat tools providing general information (with disclaimers as appropriate)

Such uses shall not require prior Council approval but shall be subject to internal documentation requirements.

(b) Decision-Adjacent Uses. AI systems that influence or support decisions affecting residents, employees, vendors, or regulated entities may be deployed only in accordance with Sections 3 and 4 of this Ordinance.

Section 3. Prohibited Uses

No Department shall deploy or use an AI system that:

  1. Performs social scoring of individuals or groups based on behavior, personal traits, or reputation for the purpose of denying services, benefits, or rights;
  2. Intentionally discriminates against a protected class in violation of state or federal law;
  3. Generates or deploys biometric identification or surveillance in violation of constitutional protections;
  4. Produces or facilitates unlawful deep-fake or deceptive content;
  5. Operates as a fully automated decision-making system without meaningful human review in matters affecting legal rights or obligations.

Section 4. Oversight and Approval for High-Risk AI Uses

(a) Inventory Requirement. The City Manager shall maintain a centralized AI Systems Inventory identifying:

  • Each AI system in use
  • The Department deploying the system
  • The system’s purpose
  • Whether the use is classified as high-risk

(b) Approval Process. Prior to deployment of any High-Risk AI Use, the Department must:

  1. Submit a written justification describing the system’s purpose and scope;
  2. Identify the data sources used by the system;
  3. Describe human oversight mechanisms;
  4. Obtain approval from:
    • The City Manager (or designee), and
    • The City Attorney for legal compliance review.

(c) Human Accountability. Each AI system shall have a designated human owner responsible for:

  • Monitoring performance
  • Responding to errors or complaints
  • Suspending use if risks are identified

Section 5. Transparency and Public Disclosure

(a) Disclosure to the Public. When a City AI system interacts directly with residents, the City shall provide clear notice that the interaction involves AI.

(b) Public Reporting. The City shall publish annually:

  • A summary of AI systems in use
  • The general purposes of high-risk AI systems
  • Contact information for public inquiries

No proprietary or security-sensitive information shall be disclosed.

Section 6. Procurement and Vendor Requirements

All City contracts involving AI systems shall, where applicable:

  1. Require disclosure of AI functions;
  2. Prohibit undisclosed algorithmic decision-making;
  3. Allow the City to audit or review AI system outputs relevant to City operations;
  4. Require vendors to notify the City of material changes to AI functionality.

Section 7. Review and Sunset

(a) Periodic Review. High-risk AI systems shall be reviewed at least annually to assess:

  • Accuracy
  • Bias
  • Continued necessity
  • Compliance with this Ordinance

(b) Sunset Authority. The City Manager may suspend or terminate use of any AI system that poses unacceptable risk or fails compliance review.

Section 8. Training

The City shall provide appropriate training to employees involved in:

  • Deploying AI systems
  • Supervising AI-assisted workflows
  • Interpreting AI-generated outputs

Section 9. Severability

If any provision of this Ordinance is held invalid, such invalidity shall not affect the remaining provisions.

Section 10. Effective Date

This Ordinance shall take effect immediately upon adoption.


Appendix 3 — City Manager Administrative Regulation: Responsible Use of Artificial Intelligence

ADMINISTRATIVE REGULATION NO. ___

Subject: Responsible Use of Artificial Intelligence (AI) in City Operations
Authority: Ordinance No. ___ (Responsible Use of Artificial Intelligence)
Issued by: City Manager
Effective Date: __________

1. Purpose

This Administrative Regulation establishes operational procedures for the responsible deployment, oversight, and monitoring of artificial intelligence (AI) systems used by the City, consistent with adopted Council policy and applicable state law.

The intent is to:

  • Enable rapid adoption of AI for productivity and service delivery;
  • Ensure transparency and accountability for higher-risk uses; and
  • Protect the City, employees, and residents from unintended consequences.

2. Scope

This regulation applies to all City departments, offices, and divisions that:

  • Develop, procure, deploy, or use AI systems; or
  • Rely on vendor-provided software that includes AI functionality.

3. AI System Classification

Departments shall classify AI systems into one of the following categories:

A. Tier 1 — Internal Productivity AI

Examples:

  • Document drafting and summarization
  • Data analysis and forecasting
  • Internal research and reporting
  • Workflow automation

Oversight Level:

  • Department-level approval
  • Registration in AI Inventory

B. Tier 2 — Decision-Adjacent AI

Examples:

  • Permit or inspection prioritization
  • Vendor or application risk scoring
  • Resource allocation recommendations
  • Enforcement or compliance triage

Oversight Level:

  • City Manager approval
  • Legal review
  • Annual performance review

C. Tier 3 — High-Risk AI

Examples:

  • AI influencing enforcement actions
  • Eligibility determinations
  • Public safety analytics
  • Biometric or surveillance tools

Oversight Level:

  • City Manager approval
  • City Attorney review
  • Documented human-in-the-loop controls
  • Annual audit and Council notification

4. AI Systems Inventory

The City Manager’s Office shall maintain a centralized AI Systems Inventory, which includes:

  • System name and vendor
  • Department owner
  • Purpose and classification tier
  • Date of deployment
  • Oversight requirements

Departments shall update the inventory prior to deploying any new AI system.

5. Approval Process

A. Tier 1 Systems

  • Approved by Department Director
  • Registered in inventory

B. Tier 2 and Tier 3 Systems

Departments must submit:

  1. A description of the system and intended use
  2. Data sources and inputs
  3. Description of human oversight
  4. Risk mitigation measures

Approval required from:

  • City Manager (or designee)
  • City Attorney (for legal compliance)

6. Human Oversight & Accountability

Each AI system shall have a designated System Owner responsible for:

  • Monitoring system outputs
  • Responding to errors or complaints
  • Suspending use if risks emerge
  • Coordinating audits or reviews

No AI system may operate as a fully autonomous decision-maker for actions affecting legal rights or obligations.

7. Vendor & Procurement Controls

Procurement involving AI systems shall:

  • Identify AI functionality explicitly in solicitations
  • Require vendors to disclose material AI updates
  • Prohibit undisclosed algorithmic decision-making
  • Preserve City audit and review rights

8. Monitoring, Review & Sunset

  • Tier 2 and Tier 3 systems shall undergo annual review.
  • Systems may be suspended or sunset if:
    • Accuracy degrades
    • Bias is identified
    • Legal risk increases
    • The system no longer serves a defined purpose

9. Training

Departments deploying AI shall ensure appropriate staff training covering:

  • Proper interpretation of AI outputs
  • Limitations of AI systems
  • Escalation and error-handling procedures

10. Reporting to Council

The City Manager shall provide Council with:

  • An annual summary of AI systems in use
  • Identification of Tier 3 (High-Risk) systems
  • Any material incidents or corrective actions

11. Effective Date

This Administrative Regulation is effective immediately upon issuance.

12. Workforce Considerations

The use of artificial intelligence systems may change job functions and workflows over time. Departments shall:

  • Use AI to augment employee capabilities wherever possible;
  • Prioritize retraining, reassignment, and natural attrition when workflows change;
  • Coordinate with Human Resources before deploying AI systems that materially alter job duties; and
  • Recognize that long-term staffing impacts, if any, remain subject to City Manager and City Council authority.

Appendix 4 — Public-Facing FAQ: Responsible Use of Artificial Intelligence in City Operations

What is this ordinance about?

This ordinance establishes clear rules for how the City may use artificial intelligence (AI) tools. It allows the City to use modern technology to improve efficiency and service delivery while ensuring that higher-risk uses are transparent, accountable, and overseen by people.

Is the City already using artificial intelligence?

Yes. Like most modern organizations, the City already uses limited AI-enabled tools for tasks such as document drafting, data analysis, and customer service support, as well as through vendor-provided software systems.

This ordinance ensures those tools are used consistently and responsibly.

Is this ordinance banning artificial intelligence?

No.
The ordinance does not ban AI. It encourages responsible adoption of AI for productivity and internal efficiency while placing guardrails on uses that could affect people’s rights or access to services.

Why is the City adopting rules now?

AI tools are becoming more common and more capable. Clear rules help ensure:

  • Transparency in how AI is used
  • Accountability for outcomes
  • Compliance with new Texas law
  • Public trust in City operations

The Texas Legislature recently enacted statewide standards for AI use by government entities, and this ordinance aligns the City with those expectations.

Will artificial intelligence affect City jobs?

AI may change how work is done over time, just as previous technologies have.

This ordinance does not authorize immediate workforce reductions. Any long-term impacts are expected to occur gradually and, where possible, through:

  • Natural attrition
  • Reassignment
  • Retraining
  • Changes in job duties

Final staffing decisions remain with City leadership and City Council.

Will AI replace City employees?

AI tools are intended to assist employees, not replace human judgment. For higher-risk uses, the ordinance requires meaningful human oversight and accountability.

Can AI make decisions about me automatically?

No.
The ordinance prohibits fully automated decision-making that affects legal rights, enforcement actions, or access to services without human review.

AI may provide information or recommendations, but people remain responsible for decisions.

Will the City use AI for surveillance or facial recognition?

The ordinance prohibits AI uses that violate constitutional protections, including improper biometric surveillance.

Any use of biometric or surveillance-related AI would require strict legal review and compliance with state and federal law.

How will I know if I’m interacting with AI?

If the City uses AI systems that interact directly with residents, the City must clearly disclose that you are interacting with an AI system.

Does this apply to police or public safety?

Yes.
AI tools used in public safety contexts are considered higher-risk and require additional review, approval, and oversight. AI systems may not independently make enforcement decisions.

Who is responsible if an AI system makes a mistake?

Each AI system has a designated City employee responsible for monitoring its use, addressing errors, and suspending the system if necessary.

Responsibility remains with the City—not the software.

Will the public be able to see how AI is used?

Yes.
The City will publish an annual summary describing:

  • The types of AI systems in use
  • Their general purpose
  • How residents can ask questions or raise concerns

Sensitive or proprietary information will not be disclosed.

Does this create a new board or bureaucracy?

No.
Oversight is handled through existing City leadership and administrative structures.

Is there a cost to adopting this ordinance?

There is no direct cost associated with adoption. Over time, responsible AI use may help control costs by improving productivity and efficiency.

How often will this policy be reviewed?

Higher-risk AI systems are reviewed annually. The ordinance itself may be updated as technology and law evolve.

Who can I contact with questions or concerns?

Residents may contact the City Manager’s Office or submit inquiries through the City’s website. Information on AI use and reporting channels will be publicly available.

Bottom Line

This ordinance ensures the City:

  • Uses modern tools responsibly
  • Maintains human accountability
  • Protects public trust
  • Aligns with Texas law
  • Adapts thoughtfully to technological change

The Municipal & Business Workquake of 2026: Why Cities Must Redesign Roles Now—Before Attrition Does It for Them

A collaboration between Lewis McLain & AI

Cities are about to experience an administrative shift that will look nothing like a “tech revolution” and nothing like a classic workforce reduction. It will arrive as a workquake: a sudden drop in the labor required to complete routine tasks across multiple departments, driven by AI systems that can ingest documents, apply rules, assemble outputs, and draft narratives at scale.

The danger is not that cities will replace everyone with software. The danger is more subtle and far more likely: cities will allow AI to hollow out core functions unintentionally, through non-replacement hiring, scattered tool adoption, and informal workflow shortcuts—until the organization’s accountability structure no longer matches the work being done.

In 2026, the right posture is not fascination or fear. It is proactive redesign.


I. The Real Change: Task Takeover, Not Job Replacement

Municipal roles often look “human” because they involve public trust, compliance, and service. But much of the day-to-day work inside those roles is structured:

  • collecting inputs
  • applying policy checklists
  • preparing standardized packets
  • producing routine reports
  • tracking deadlines
  • drafting summaries
  • reconciling variances
  • adding narrative to numbers

Those tasks are precisely what modern AI systems now handle with speed and consistency. What remains human is still vital—but it is narrower: judgment, discretion, ethics, and accountability.

That creates the same pattern across departments:

  • the production layer shrinks rapidly
  • the review and exception layer becomes the job

Cities that don’t define this shift early will experience it late—as a staffing and governance crisis.


II. Example – City Secretary: Where Governance Work Becomes Automated

The city secretary function sits at the center of formal governance: agendas, minutes, public notices, records, ordinances, and elections. Much of the labor in this area is procedural and document-heavy.

Tasks likely to be absorbed quickly

  • Agenda assembly from departmental submissions
  • Packet compilation and formatting
  • Deadline tracking for posting and notices
  • Records indexing and retrieval
  • Draft minutes from audio/video with time stamps
  • Ordinance/resolution histories and cross-references

What shrinks

  • clerical assembly roles
  • manual transcription
  • routine records handling

What becomes more important

  • legal compliance judgment (Open Meetings, Public Information)
  • defensibility of the record
  • election integrity protocols
  • final human review of public-facing outputs

In other words: the city secretary role does not disappear. It becomes governance QA—with higher stakes and fewer support layers.


III. Example – Purchasing & Procurement: Where Process Becomes Automated Screening

Purchasing has always been a mix of routine compliance and high-risk discretion. AI hits the routine side first, fast.

Tasks likely to be absorbed quickly

  • quote comparisons and bid tabulations
  • price benchmarking against history and peers
  • contract template population
  • insurance/required-doc compliance checks
  • renewal tracking and vendor performance summaries
  • anomaly detection (odd pricing, split purchases, policy exceptions)

What shrinks

  • bid tabulators
  • quote chasers
  • contract formatting staff
  • clerical procurement roles

What becomes more important

  • vendor disputes and negotiations
  • integrity controls (conflicts, favoritism risk)
  • exception approvals with documented reasoning
  • strategic sourcing decisions

Procurement shifts from “processing” to risk-managed decisioning.


IV. Example – Budget Analysts: Where “Analysis” Separates from “Assembly”

Budget offices are often assumed to be purely analytical. In reality, a large share of the work is assembly: gathering departmental submissions, normalizing formats, building tables, writing routine narratives, and explaining variances.

Tasks likely to be absorbed quickly

  • ingestion and normalization of department requests
  • enforcement of submission rules and formatting
  • auto-generated variance explanations
  • draft budget narratives (department summaries, highlights)
  • scenario tables (base, constrained, growth cases)
  • continuous budget-to-actual reconciliation

What shrinks

  • entry-level budget analysts
  • table builders and narrative drafters
  • budget book production labor

What becomes more important

  • setting assumptions and policy levers
  • framing tradeoffs for leadership and council
  • long-range fiscal forecasting judgment
  • telling the truth clearly under political pressure

Budget staff shift from spreadsheet production to decision support and persuasion with integrity.


V. Example – Police & Fire Data Analysts: Where Reporting Becomes Real-Time Patterning

Public safety analytics is one of the most automatable municipal domains because it is data-rich, structured, and continuous. The “report builder” role is especially vulnerable.

Tasks likely to be absorbed quickly

  • automated monthly/quarterly performance reporting
  • response-time distribution analysis
  • hotspot mapping and geospatial summaries
  • staffing demand pattern detection
  • anomaly flagging (unusual patterns in calls, activity, response)
  • draft CompStat-style narratives and slide-ready briefings

What shrinks

  • manual report builders
  • map producers
  • dashboard-only roles
  • grant-report drafters relying on routine metrics

What becomes more important

  • human interpretation (what the pattern means operationally)
  • explaining limitations and avoiding false certainty
  • bias and fairness oversight
  • defensible analytics for court, public inquiry, or media scrutiny

Public safety analytics becomes less about producing charts and more about protecting truth and trust.


VI. Example – More Roles Next in Line

Permitting & Development Review

AI can quickly absorb:

  • completeness checks
  • code cross-referencing
  • workflow routing and status updates
  • templated staff reports

Humans remain essential for:

  • discretionary judgments
  • negotiation with applicants
  • interpreting ambiguous code situations
  • public-facing case management

HR Analysts

AI absorbs:

  • classification comparisons
  • market surveys and comp modeling
  • policy drafting and FAQ support

Humans remain for:

  • discipline, negotiations, sensitive cases
  • equity judgments and culture
  • leadership counsel and conflict resolution

Grants Management

AI absorbs:

  • opportunity scanning and matching
  • compliance calendars
  • draft narrative sections and attachments lists

Humans remain for:

  • strategy (which grants matter)
  • partnerships and commitments
  • risk management and audit defense

VII. The Practical Reality in Cities: Attrition Is the Mechanism

This won’t arrive as dramatic layoffs. It will arrive as:

  • hiring freezes
  • “we won’t backfill that position”
  • consolidation of roles
  • sudden expectations that one person can do what three used to do

If cities do nothing, AI will still be adopted—piecemeal, unevenly, and without governance redesign. That produces an organization with:

  • fewer people
  • unclear accountability
  • heavier compliance risk
  • fragile institutional memory

VIII. What “Proactive” Looks Like in 2026

Cities need to act immediately in four practical ways:

  1. Define what must remain human
    • elections integrity
    • public record defensibility
    • procurement exceptions and ethics
    • budget assumption-setting and council framing
    • public safety interpretation and bias oversight
  2. Separate production from review
    • let AI assemble
    • require humans to verify, approve, and own
  3. Rewrite job descriptions now
    • stop hiring for assembly work
    • hire for judgment, auditing, communication, and governance
  4. Build the governance layer
    • standards for AI outputs
    • audit trails
    • transparency policies
    • escalation rules
    • periodic review of AI-driven decisions

This is not an IT upgrade. It’s a redesign of how public authority is exercised.


Conclusion: The Choice Cities Face

Cities will adopt AI regardless—because the savings and speed will be undeniable. The only choice is whether the city adopts AI intentionally or accidentally.

If adopted intentionally, AI becomes:

  • a productivity tool
  • a compliance enhancer
  • a service accelerator

If adopted accidentally, AI becomes:

  • a quiet hollowing of institutional capacity
  • a transfer of control from policy to tool
  • and eventually a governance failure that will be blamed on people who never had the chance to redesign the system

2026 is early enough to steer the transition.
Waiting will not preserve the old model. It will only ensure the new one arrives without a plan.

End note: I usually spend a couple of days (minimum) compiling all my bank and credit card records, assigning classifications, summarizing, and handing my CPA a complete set of documents. This year I uploaded the documents to AI, gave it instructions to prepare the package, and answered its list of questions about reconciliation and classification issues. Two hours later, I had the full package, with comparisons to past years drawn from the returns I had also uploaded. I was 100% ready on New Year’s Eve, just waiting for the 1099s to arrive by the end of January. Meanwhile, I have had AI enhance and build a comprehensive accounting system with beautiful schedules: cash flow, taxation notes, checklists with new IRS rules, and general help beyond what I was getting from my CPA. I’ll be able to actually take over the CPA duties. It’s just the start of the things I can turn over to AI while I become the editor and reviewer instead of doing the dreaded grunt work. LFM

The Infrastructure We Don’t See: Aging Gas Systems, Hidden Risks, and the Case for Annual Accountability

A collaboration between Lewis McLain & AI

It’s not if, but when!

Natural gas infrastructure is the most invisible—and therefore the most misunderstood—critical system in modern cities. Power lines are visible. Water mains announce themselves through pressure and flow. Roads crack and bridges age in plain sight. But gas lines remain buried, silent, and largely forgotten—until something goes wrong.

That invisibility is not benign. It creates a governance gap where responsibility is fragmented, risk is assumed rather than measured, and accountability is episodic instead of continuous. As cities grow denser, older, and more complex, that gap widens.

This essay makes a simple but demanding case: cities should require annual, technical accountability briefings from gas utilities and structured gas-safety evaluations for high-occupancy buildings—public and private—because safety is no longer assured by age, ownership boundaries, or regulatory compliance alone.

The ultimate question is not whether gas systems are regulated. They are.
The question is whether, at the local level, we are actually safer than we were a year ago.


I. The Aging Gas Network: A Technical Reality, Not a Hypothetical

Much of the U.S. gas distribution network was installed decades ago. While significant modernization has occurred, legacy materials—particularly cast iron and bare steel—still exist in pockets, often in the very neighborhoods where density, redevelopment, and consequence are highest.

These systems age in predictable ways:

  • Material degradation such as corrosion, joint failure, and metal fatigue
  • Ground movement from expansive soils, drought cycles, and freeze–thaw conditions
  • Pressure cycling driven by modern load variability
  • Construction interaction, including third-party damage during roadway, utility, and redevelopment projects

Technically speaking, aging is not a binary condition. It is a curve. Systems do not fail all at once; they fail where stress, material fatigue, and external disturbance intersect. Cities that approve redevelopment without understanding where those intersections lie are not managing risk—they are inheriting it.


II. Monitoring Is Better Than Ever—But It Is Not Replacement

Modern gas utilities deploy advanced leak detection technologies that did not exist a generation ago: mobile survey vehicles, high-sensitivity handheld sensors, aerial detection, and in some cases continuous monitoring.

Regulatory standards have improved as well. Leak surveys are more frequent, detection thresholds are lower, and repair timelines are clearer. From a technical standpoint, the industry is better at finding leaks than it was even a few years ago.

But monitoring is inherently reactive. It detects deterioration after it has begun. It does not restore structural integrity. It does not change the age profile of the system. It does not eliminate brittle joints or corrosion-prone materials.

Replacement is the only permanent risk reduction. And replacement is expensive, disruptive, and largely invisible unless cities require it to be discussed openly.


III. Why Annual Gas Utility Accountability Briefings Are Essential

Gas utilities operate under long-range capital replacement programs driven by regulatory approval, rate recovery, and internal prioritization models. Cities operate under land-use approvals, zoning changes, density increases, and redevelopment pressures that can change risk far faster than infrastructure plans adjust.

An annual gas utility accountability briefing is how those two worlds reconnect.

Not a promotional update. Not a general safety overview. But a technical, decision-grade briefing that allows city leadership to understand:

  • What materials remain in the ground
  • Where risk is concentrated
  • How fast legacy systems are being retired
  • Whether replacement is keeping pace with growth
  • Where development decisions may be increasing consequence

Without this, cities are effectively approving new intensity above ground while assuming adequacy below it.


IV. The Forgotten Segment: From the Meter to the Building

Most gas incidents that injure people do not originate in transmission pipelines or deep mains. They occur closest to occupied space—often in the short stretch between the gas meter and the building structure.

Legally, responsibility is clear:

  • The utility owns and maintains the system up to the meter.
  • The property owner owns everything downstream.

Assessment, however, is not.

Post-meter gas piping is frequently:

  • Older steel without modern corrosion protection
  • Stressed by foundation movement
  • Altered during remodels and additions
  • Poorly documented
  • Rarely inspected after initial construction

Utilities generally do not inspect customer-owned piping. Building departments see it only during permitted work. Fire departments respond after leaks are reported. Property owners often do not realize they own it.

This creates a true orphaned asset class: high-consequence infrastructure with no lifecycle oversight.


V. Responsibility Alone Is Not Safety

Cities often take comfort in the legal distinction: “That’s private property.” Legally, that is correct. Practically, it is insufficient.

Gas does not respect ownership boundaries. A failure inside a school, apartment building, restaurant, or nursing home becomes a public emergency immediately.

Risk governance does not require cities to assume liability. It requires them to ensure that someone is actually evaluating risk in places where failure would have severe consequences.


VI. Required Gas-Safety Evaluations for High-Occupancy Properties

This is the missing pillar of modern gas safety.

Just as elevators, fire suppression systems, and boilers undergo periodic inspection, gas piping systems in high-occupancy buildings should be subject to structured evaluation—regardless of whether the building is publicly or privately owned.

Facilities warranting mandatory evaluation include:

  • Schools (public and private)
  • Daycares
  • Nursing homes and assisted-living facilities
  • Hospitals and clinics
  • Large multifamily buildings
  • Assembly venues (churches, theaters, gyms)
  • Restaurants and food-service establishments
  • High-load commercial and industrial users

These are places where evacuation is difficult, ignition sources are common, and consequences are magnified.

A gas-safety evaluation should assess:

  • Condition and material of post-meter piping
  • Corrosion, support, and anchoring
  • Stress at building entry points
  • Evidence of undocumented modifications or abandoned lines
  • Accessibility and labeling of shutoff valves

These evaluations need not be frequent. They need to be periodic, triggered, and credible.


VII. Triggers That Make the System Work

Cities can implement this framework without blanket inspections by tying evaluations to specific events:

  • Change of occupancy or use
  • Major remodels or additions
  • Buildings reaching certain age thresholds when work is permitted
  • Repeated gas odor or leak responses
  • Sale or transfer of high-occupancy properties

This approach focuses effort where risk is most likely to have changed.


VIII. Public vs. Private: One Standard of Care

A gas explosion in a public school is not meaningfully different from one in a private daycare or restaurant. The victims do not care who owned the pipe.

A city that limits safety evaluation requirements to public buildings is acknowledging risk—but only partially. The standard should be risk-based, not ownership-based.


IX. Are We Better or Worse Off Than a Year Ago?

Technically, the answer is nuanced.

We are better off nationally in detection capability and regulatory clarity. Technology has improved. Survey frequency has increased. Reporting is stronger.

But many cities are likely worse off locally in exposure:

  • Buildings are older
  • Density is higher
  • Construction activity is heavier
  • Post-meter piping remains largely unassessed
  • High-occupancy facilities rely on outdated assumptions

So the honest answer is this:

We are better at finding problems—but not necessarily better at eliminating risk where people live, work, and gather.


X. Governance Is the Missing Link

Gas safety is no longer only an engineering problem. It is a governance problem.

Cities already regulate:

  • Land use and density
  • Building permits and occupancy
  • Business licensing
  • Emergency response coordination

Requiring annual gas utility accountability briefings and targeted gas-safety evaluations does not expand government arbitrarily. It closes a blind spot that modern urban conditions have exposed.


Conclusion: Asking the Right Question, Every Year

The most important question cities should ask annually is not:

“Did the utility comply with regulations?”

It is:

“Given our growth, our buildings, and our infrastructure, are we actually safer than we were last year?”

If city leaders cannot answer that clearly—above ground and below—it is not because the answer is unknowable.

It is because no one has required it to be known.


Appendix A

Model Ordinance: Gas Infrastructure Accountability and High-Occupancy Safety Evaluations

This model ordinance is designed to improve transparency, situational awareness, and public safety without transferring ownership, operational control, or liability from utilities or property owners to the City.


Section 1. Purpose and Findings

1.1 Purpose

The purpose of this ordinance is to:

  1. Improve transparency regarding the condition, monitoring, and replacement of gas infrastructure;
  2. Ensure that risks associated with aging gas systems are identified and reduced over time;
  3. Require periodic gas safety evaluations for high-occupancy buildings where consequences of failure are greatest;
  4. Strengthen coordination among gas utilities, property owners, and City emergency services; and
  5. Establish consistent, decision-grade information for City leadership.

1.2 Findings

The City Council finds that:

  1. Natural gas infrastructure is largely underground and not visible to the public.
  2. Portions of the gas system—including customer-owned piping—may age without systematic reassessment.
  3. Increased density, redevelopment, and construction activity elevate the consequences of gas failures.
  4. Existing regulatory frameworks do not provide city-specific visibility into system condition or replacement progress.
  5. Periodic reporting and targeted evaluation improve public safety without assuming utility or private ownership responsibilities.

Section 2. Annual Gas Utility Accountability Briefing

2.1 Requirement

Each gas utility operating within the City shall provide an Annual Gas Infrastructure Accountability Briefing to the City Council or its designated committee.

2.2 Scope

The briefing shall address, at a minimum:

  • Pipeline materials and age profile;
  • Replacement progress and future plans;
  • Leak detection, classification, and repair performance;
  • High-consequence areas and impacts of development;
  • Construction coordination and damage prevention;
  • Emergency response readiness and communication protocols.

2.3 Format and Standards

  • Briefings shall include written materials, maps, and data tables.
  • Metrics shall be presented in a year-over-year comparable format.
  • Information shall be technical, factual, and suitable for governance decision-making.

2.4 No Transfer of Liability

Nothing in this section shall be construed to transfer ownership, maintenance responsibility, or operational control of gas facilities to the City.


Section 3. High-Occupancy Gas Safety Evaluations

3.1 Covered Facilities

Gas safety evaluations are required for the following facilities, whether publicly or privately owned:

  • Schools (public and private)
  • Daycare facilities
  • Nursing homes and assisted-living facilities
  • Hospitals and medical clinics
  • Multifamily buildings exceeding [X] dwelling units
  • Assembly occupancies exceeding [X] persons
  • Restaurants and commercial food-service establishments
  • Other facilities designated by the Fire Marshal as high-consequence occupancies

3.2 Scope of Evaluation

Evaluations shall assess:

  • Condition and materials of post-meter gas piping
  • Corrosion potential and structural support
  • Stress at building entry points and foundations
  • Evidence of undocumented modifications or abandoned piping
  • Accessibility, labeling, and operation of shutoff valves

3.3 Qualified Evaluators

Evaluations shall be conducted by:

  • Licensed plumbers,
  • Licensed mechanical contractors, or
  • Professional engineers with gas system experience.

3.4 Triggers

Evaluations shall be required upon:

  • Change of occupancy or use;
  • Major remodels or building additions;
  • Buildings reaching [X] years of age when permits are issued;
  • Repeated gas odor complaints or leak responses;
  • Sale or transfer of covered properties, if adopted by the City.

Section 4. Documentation and Compliance

4.1 Certification

Property owners shall submit documentation certifying completion of required evaluations.

4.2 Corrective Action

Identified hazards shall be corrected within timeframes established by code officials.

4.3 Enforcement

Non-compliance may result in:

  • Withholding of permits or certificates of occupancy;
  • Temporary suspension of approvals;
  • Administrative penalties as authorized by law.

Section 5. Education and Coordination

The City shall:

  • Provide educational materials clarifying ownership and safety responsibilities;
  • Coordinate with gas utilities on public outreach;
  • Integrate findings into emergency response planning and training.


Appendix B

Annual Gas Utility Accountability Briefing — Preparation Checklist

This checklist ensures annual briefings are consistent, measurable, and focused on risk reduction rather than general compliance.


I. System Inventory & Condition

☐ Total pipeline miles within city limits (distribution vs. transmission)
☐ Pipeline miles by material type
☐ Pipeline miles by decade installed
☐ Location and extent of remaining legacy materials
☐ Identification of oldest segments still in service


II. Replacement Progress

☐ Miles replaced in the previous year (by material type)
☐ Five-year replacement plan with schedules
☐ Funded vs. unfunded replacement projects
☐ Year-over-year reduction in legacy materials
☐ Explanation of changes from prior plans


III. Leak Detection & Repair Performance

☐ Total leaks detected (normalized per mile)
☐ Leak classification breakdown
☐ Average and maximum repair times by class
☐ Repeat leak locations identified and mapped
☐ Root-cause analysis of recurring issues


IV. Monitoring Technology

☐ Detection technologies currently deployed
☐ Survey frequency achieved vs. required
☐ Use of advanced or emerging detection tools
☐ Known limitations of monitoring methods


V. High-Consequence Areas

☐ Definition and criteria for high-consequence zones
☐ Updated risk maps
☐ Impact of new development on risk profile
☐ Trunk lines serving rapidly densifying areas


VI. Construction & Damage Prevention

☐ Third-party damage incidents
☐ 811 ticket response performance
☐ High-risk project types identified
☐ Coordination procedures with City capital projects


VII. Emergency Response Readiness

☐ Incident response timelines
☐ Coordination with fire, police, and emergency management
☐ Date and scope of last joint exercise or drill
☐ Public communication and notification protocols


VIII. Customer-Owned (Post-Meter) Piping

☐ Incidents involving post-meter piping
☐ Common failure materials or conditions
☐ Customer education and outreach efforts
☐ Voluntary inspection or assistance programs


IX. Forward-Looking Risk Assessment

☐ Top unresolved risks
☐ Areas of greatest concern
☐ Commitments for the next 12 months
☐ Clear answer to:
“Are we safer than last year—and why?”


Closing Note

A briefing that cannot complete this checklist is not merely incomplete. It is revealing where risk remains unmanaged.

That visibility is the purpose of accountability.

An Update on Drone Uses in Texas Municipalities

A second collaboration between Lewis McLain & AI

From Tactical Tools to a Quiet Redefinition of First Response

A decade ago, a municipal drone program in Texas usually meant a small team, a locked cabinet, and a handful of specially trained officers who were called out when circumstances justified it. The drone was an accessory—useful, sometimes impressive, but peripheral to the ordinary rhythm of public safety.

That is no longer the case.

Across Texas, drones are being absorbed into the daily mechanics of emergency response. In a growing number of cities, they are no longer something an officer brings to a scene. They are something the city sends—often before the first patrol car, engine, or ambulance has cleared an intersection.

This shift is subtle, technical, and easily misunderstood. But it represents one of the most consequential changes in municipal public safety design in a generation.


The quiet shift from tools to systems

The defining change is not better cameras or longer flight times. It is program design.

Early drone programs were built around people: pilots, certifications, and equipment checklists. Today’s programs are built around systems—launch infrastructure, dispatch logic, real-time command centers, and policies that define when a drone may be used and, just as importantly, when it may not.

Cities like Arlington illustrate this evolution clearly. Arlington’s drones are not stored in trunks or deployed opportunistically. They launch from fixed docking stations, are controlled through the city’s real-time operations center, and are sent to calls the way any other responder would be. The drone’s role is not to replace officers, but to give them something they rarely had before arrival: certainty.

Is someone actually inside the building? Is the suspect still there? Is the person lying in the roadway injured or already moving? These are small questions, but they shape everything that follows. In many cases, the presence of a drone overhead resolves a situation before physical contact ever occurs.

That pattern—early information reducing risk—is now being repeated, in different forms, across the state.


North Texas as an early laboratory

In North Texas, the progression from experimentation to normalization is especially visible.

Arlington’s program has become a reference point, not because it is flashy, but because it works. Drones are treated as routine assets, subject to policy, supervision, and after-action review. Their value is measured in response times and avoided escalations, not in flight hours.

Nearby, Dallas is navigating a more complex path. Dallas already operates one of the most active municipal drone programs in the state, but scale changes everything. Dense neighborhoods, layered airspace, multiple airports, and heightened civil-liberties scrutiny mean that Dallas cannot simply replicate what smaller cities have done.

Instead, Dallas appears to be doing something more consequential: deliberately embedding “Drone as First Responder” capability into its broader public-safety technology framework. Procurement language and public statements now describe drones verifying caller information while officers respond—a quiet but important acknowledgement that drones are becoming part of the dispatch process itself. If Dallas succeeds, it will establish a model for large, complex cities that have so far watched DFR from a distance.

Smaller cities have moved faster.

Prosper, for example, has embraced automation as a way to overcome limited staffing and long travel distances. Its program emphasizes speed—sub-two-minute arrivals made possible by automated docking stations that handle charging and readiness without human intervention. Prosper’s experience suggests that cities do not have to grow into DFR gradually; some can leap directly to system-level deployment.

Cities like Euless represent another important strand of adoption. Their programs are smaller, more cautious, and intentionally bounded. They launch drones to specific call types, collect experience, and adjust policy as they go. These cities matter because they demonstrate how DFR spreads laterally, city by city, through observation and imitation rather than mandates or statewide directives.


South Texas and the widening geography of DFR

DFR is not a North Texas phenomenon.

In the Rio Grande Valley, Edinburg has publicly embraced dispatch-driven drone response for crashes, crimes in progress, and search-and-rescue missions, including night operations using thermal imaging. In regions where heat, terrain, and distance complicate traditional response, the value of rapid aerial awareness is obvious.

Further west, Laredo has framed drones as part of a broader rapid-response network rather than a narrow policing tool. Discussions there extend beyond observation to include overdose response and medical support, pointing toward a future where drones do more than watch—they enable intervention while ground units close the gap.

Meanwhile, cities like Pearland have quietly done the hardest work of all: making DFR ordinary. Pearland’s early focus on remote operations and program governance is frequently cited by other cities, even when it draws little public attention. Its lesson is simple but powerful: the more boring a drone program becomes, the more likely it is to scale.


What 2026 will likely bring

By 2026, Texas municipalities will no longer debate drones in abstract terms. The conversation will shift to coverage, performance, and restraint.

City leaders will ask how much of their jurisdiction can be reached within two or three minutes, and what it costs to achieve that standard. DFR coverage maps will begin to resemble fire-station service areas, and response-time percentiles will replace anecdotal success stories.

Dispatch ownership will matter more than pilot skill. The most successful programs will be those in which drones are managed as part of the call-taking and response ecosystem, not as specialty assets waiting for permission. Pilots will become supervisors of systems, not just operators of aircraft.

At the same time, privacy will increasingly determine the pace of expansion. Cities that define limits early—what drones will never be used for, how long video is kept, who can access it—will move faster and with less friction. Those that delay these conversations will find themselves stalled, not by technology, but by public distrust.

Federal airspace rules will continue to separate tactical programs from scalable ones. Dense metro areas will demand more sophisticated solutions—automated docks, detect-and-avoid capabilities, and carefully designed flight corridors. The cities that solve these problems will not just have better drones; they will have better systems.

And perhaps most telling of all, drones will gradually fade from public conversation. When residents stop noticing them—when a drone overhead is no more remarkable than a patrol car passing by—the transformation will be complete.


A closing thought

Texas cities are not adopting drones because they are fashionable or futuristic. They are doing so because time matters, uncertainty creates risk, and early information saves lives—sometimes by prompting action, and sometimes by preventing it.

By 2026, the question will not be whether drones belong in municipal public safety. It will be why any city, given the chance to act earlier and safer, would choose not to.


Looking Ahead to 2026: When Drones Become Ordinary

By 2026, the most telling sign of success for municipal drone programs in Texas will not be innovation, expansion, or even capability. It will be normalcy.

The early years of public-safety drones were marked by novelty. A drone launch drew attention, generated headlines, and often triggered anxiety about surveillance or overreach. That phase is already fading. What is emerging in its place is quieter and far more consequential: drones becoming an assumed part of the response environment, much like radios, body cameras, or computer-aided dispatch systems once did.

The conversation will no longer revolve around whether a city has drones. Instead, it will focus on coverage and performance. City leaders will ask how quickly aerial eyes can reach different parts of the city, how often drones arrive before ground units, and what percentage of priority calls benefit from early visual confirmation. Response-time charts and service-area maps will replace anecdotes and demonstrations. In this sense, drones will stop being treated as technology and start being treated as infrastructure.

This shift will also clarify responsibility. The most mature programs will no longer center on individual pilots or specialty units. Ownership will move decisively toward dispatch and real-time operations centers. Drones will be launched because a call meets predefined criteria, not because someone happens to be available or enthusiastic. Pilots will increasingly function as system supervisors, ensuring compliance, safety, and continuity, rather than as hands-on operators for every flight.

At the same time, restraint will become just as important as reach. Cities that succeed will be those that articulate, early and clearly, what drones are not for. By 2026, residents will expect drone programs to come with explicit boundaries: no routine patrols, no generalized surveillance, no silent expansion of mission. Programs that fail to define those limits will find themselves stalled, regardless of how capable the technology may be.

Federal airspace rules and urban complexity will further separate casual programs from durable ones. Large cities will discover that scaling drones is less about buying more aircraft and more about solving coordination problems—airspace, redundancy, automation, and integration with other systems. The cities that work through those constraints will not just fly more often; they will fly predictably and defensibly.

And then, gradually, the attention will drift away.

When a drone arriving overhead is no longer remarkable—when it is simply understood as one of the first tools a city sends to make sense of an uncertain situation—the transition will be complete. The public will not notice drones because they will no longer symbolize change. They will symbolize continuity.

That is the destination Texas municipalities are approaching: not a future where drones dominate public safety, but one where they quietly support it—reducing uncertainty, improving judgment, and often preventing escalation precisely because they arrive early and ask the simplest question first: What is really happening here?

By 2026, the most advanced drone programs in Texas will not feel futuristic at all. They will feel inevitable.

The Modern Financial & General Analyst’s Core Skill Set

Excel, SQL Server, Power BI — With AI Doing the Heavy Lifting

A collaboration between Lewis McLain & AI

Introduction: The Skill That Now Matters Most

The most important analytical skill today is no longer memorizing syntax, mastering a single tool, or becoming a narrow specialist.

The must-have skill is knowing how to direct intelligence.

In practice, that means combining:

  • Excel for thinking, modeling, and scenarios
  • SQL Server for structure, scale, and truth
  • Power BI for communication and decision-making
  • AI as the teacher, coder, documenter, and debugger

This is not about replacing people with AI.
It is about finally separating what humans are best at from what machines are best at—and letting each do their job.


1. Stop Explaining. Start Supplying.

One of the biggest mistakes people make with AI is trying to explain complex systems to it in conversation.

That is backward.

The Better Approach

If your organization has:

  • an 80-page budget manual
  • a cost allocation policy
  • a grant compliance guide
  • a financial procedures handbook
  • even the City Charter

Do not summarize it for AI.
Give AI the document.

Then say:

“Read this entire manual. Summarize it back to me in 3–5 pages so I can confirm your understanding.”

This is where AI excels.

AI is extraordinarily good at:

  • absorbing long, dense documents
  • identifying structure and hierarchy
  • extracting rules, exceptions, and dependencies
  • restating complex material in plain language

Once AI demonstrates understanding, you can say:

“Assume this manual governs how we budget. Based on that understanding, design a new feature that…”

From that point on, AI is no longer guessing.
It is operating within your rules.

This is the fundamental shift:

  • Humans provide authoritative context
  • AI provides execution, extension, and suggested next steps

You will see this principle repeated throughout this post and the appendices—because everything else builds on it.


2. The Stack Still Matters (But for Different Reasons Now)

AI does not eliminate the need for Excel, SQL Server, or Power BI.
It makes them far more powerful—and far more accessible.


Excel — The Thinking and Scenario Environment

Excel remains the fastest way to:

  • test ideas
  • explore “what if” questions
  • model scenarios
  • communicate assumptions clearly

What has changed is not Excel—it is the burden placed on the human.

You no longer need to:

  • remember every formula
  • write VBA macros from scratch
  • search forums for error messages

AI already understands:

  • Excel formulas
  • Power Query
  • VBA (Visual Basic for Applications, Excel’s automation language)

You can say:

“Write an Excel model with inputs, calculations, and outputs for this scenario.”

AI will:

  • generate the formulas
  • structure the workbook cleanly
  • comment the logic
  • explain how it works

If something breaks:

  • AI reads the error message
  • explains why it occurred
  • fixes the formula or macro

Excel becomes what it was always meant to be:
a thinking space, not a memory test.


SQL Server — The System of Record and Truth

SQL Server is where analysis becomes reliable, repeatable, and scalable.

It holds:

  • historical data (millions of records are routine)
  • structured dimensions
  • consistent definitions
  • auditable transformations

Here is the shift AI enables:

You do not need to be a syntax expert.

SQL (Structured Query Language) is something AI already understands deeply.

You can say:

“Create a SQL view that allocates indirect costs by service hours. Include validation queries.”

AI will:

  • write the SQL
  • optimize joins
  • add comments
  • generate test queries
  • flag edge cases
  • produce clear documentation

AI can also interpret SQL Server error messages, explain them in plain English, and rewrite the code correctly.

This removes one of the biggest barriers between finance and data systems.

SQL stops being “IT-only” and becomes a shared analytical language, with AI translating analytical intent into executable code.
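
To ground that claim, here is a minimal sketch of the kind of view the allocation prompt above might produce. Everything here is hypothetical scaffolding: dbo.ServiceHours and dbo.IndirectCosts are assumed table names, not a real schema. The shape is what matters: a commented view plus a validation query that proves the allocation ties out.

```sql
-- Assumed tables: dbo.IndirectCosts (FiscalYear, TotalIndirectCost)
-- and dbo.ServiceHours (FiscalYear, Department, Hours).
CREATE VIEW dbo.vw_IndirectCostAllocation
AS
SELECT
    sh.FiscalYear,
    sh.Department,
    sh.Hours,
    -- Each department's share of total service hours for the year
    CAST(sh.Hours AS decimal(18, 6))
        / SUM(sh.Hours) OVER (PARTITION BY sh.FiscalYear) AS HoursShare,
    -- Allocated indirect cost = hours share times the year's total
    ic.TotalIndirectCost * sh.Hours
        / SUM(sh.Hours) OVER (PARTITION BY sh.FiscalYear) AS AllocatedCost
FROM dbo.ServiceHours AS sh
JOIN dbo.IndirectCosts AS ic
    ON ic.FiscalYear = sh.FiscalYear;
GO

-- Validation: allocated amounts must tie back to the source total by year.
SELECT
    v.FiscalYear,
    SUM(v.AllocatedCost)                              AS AllocatedTotal,
    MAX(ic.TotalIndirectCost)                         AS SourceTotal,
    SUM(v.AllocatedCost) - MAX(ic.TotalIndirectCost)  AS Variance
FROM dbo.vw_IndirectCostAllocation AS v
JOIN dbo.IndirectCosts AS ic
    ON ic.FiscalYear = v.FiscalYear
GROUP BY v.FiscalYear;
```

The validation query is the part a human reviewer should insist on: if Variance is not zero (within rounding), the view is wrong no matter how clean it looks.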


Power BI — Where Decisions Happen

Power BI is the communication layer: dashboards, trends, drilldowns, and monitoring.

It relies on DAX (Data Analysis Expressions), the calculation language used by Power BI.

Here is the key reassurance:

AI already understands DAX extremely well.

DAX is:

  • rule-based
  • pattern-driven
  • language-like

This makes it ideal for AI assistance.

You do not need to memorize DAX syntax.
You need to describe what you want.

For example:

“I want year-over-year change, rolling 12-month averages, and per-capita measures that respect slicers.”

AI can:

  • write the measures
  • explain filter context
  • fix common mistakes
  • refactor slow logic
  • document what each measure does

Power BI becomes less about struggling with formulas and more about designing the right questions.
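
None of this logic is mystical. To keep this post to a single query language, here is the same year-over-year and rolling 12-month pattern sketched in T-SQL against a hypothetical dbo.MonthlyTotals table; in Power BI itself, AI would express identical logic as DAX measures that respect slicers.

```sql
-- Assumed table: dbo.MonthlyTotals (MonthStart date, Amount decimal(18, 2)),
-- one row per month with no gaps.
SELECT
    MonthStart,
    Amount,
    -- Year-over-year change: compare to the same month twelve rows back
    Amount - LAG(Amount, 12) OVER (ORDER BY MonthStart) AS YoYChange,
    -- Rolling 12-month average: current month plus the prior eleven
    AVG(Amount) OVER (
        ORDER BY MonthStart
        ROWS BETWEEN 11 PRECEDING AND CURRENT ROW
    ) AS Rolling12MoAvg
FROM dbo.MonthlyTotals
ORDER BY MonthStart;
```

The design question a human still owns: whether a rolling 12-month figure should report anything before twelve full months of history exist.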


3. AI as the Documentation Engine (Quietly Transformational)

Documentation is where most analytical systems decay.

  • Excel models with no explanation
  • SQL views nobody understands
  • Macros written years ago by someone who left
  • Reports that “work” but cannot be trusted

AI changes this completely.

SQL Documentation

AI can:

  • add inline comments to SQL queries
  • write plain-English descriptions of each view
  • explain table relationships
  • generate data dictionaries automatically (a sketch follows below)

You can say:

“Document this SQL view so a new analyst understands it.”

And receive:

  • a clear narrative
  • assumptions spelled out
  • warnings about common mistakes
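
The data-dictionary item flagged above is more mechanical than it sounds. SQL Server already exposes its catalog through the standard INFORMATION_SCHEMA views, so a first-draft dictionary is one query away; AI’s contribution is annotating the result with plain-English descriptions. A minimal sketch:

```sql
-- First draft of a data dictionary: every column in the database,
-- pulled from the standard INFORMATION_SCHEMA catalog views.
SELECT
    c.TABLE_SCHEMA,
    c.TABLE_NAME,
    c.COLUMN_NAME,
    c.DATA_TYPE,
    c.CHARACTER_MAXIMUM_LENGTH,
    c.IS_NULLABLE
FROM INFORMATION_SCHEMA.COLUMNS AS c
ORDER BY c.TABLE_SCHEMA, c.TABLE_NAME, c.ORDINAL_POSITION;
```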

Excel & Macro Documentation

AI can:

  • explain what each worksheet does
  • document VBA macros line-by-line
  • generate user instructions
  • rewrite messy macros into cleaner, documented code

Recently, I had a powerful but stodgy Excel workbook with over 1.4 million formulas.
AI read the entire file, explained the internal logic accurately, and rewrote the system in SQL with a few hundred well-documented lines—producing identical results.

Documentation stops being an afterthought.
It becomes cheap, fast, and automatic.


4. AI as Debugger and Interpreter

One of AI’s most underrated strengths is error interpretation.

AI excels at:

  • reading cryptic error messages
  • identifying likely causes
  • suggesting fixes
  • explaining failures in plain language

You can copy-paste an error message without comment and say:

“Explain this error and fix the code.”

This applies to:

  • Excel formulas
  • VBA macros
  • SQL queries
  • Power BI refresh errors
  • DAX logic problems

Hours of frustration collapse into minutes.
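
A small illustration of the pattern, using one of the most common SQL Server runtime errors. The table is hypothetical; the error message is the real one SQL Server raises (Msg 8134).

```sql
-- The failing query: raises "Divide by zero error encountered."
-- whenever a department has zero recorded hours.
SELECT Department, TotalCost / TotalHours AS CostPerHour
FROM dbo.DepartmentSummary;   -- hypothetical table

-- The fix AI typically proposes: NULLIF turns a zero denominator into
-- NULL, so the row returns NULL instead of aborting the whole query.
SELECT Department, TotalCost / NULLIF(TotalHours, 0) AS CostPerHour
FROM dbo.DepartmentSummary;
```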


5. What Humans Still Must Do (And Always Will)

AI is powerful—but it is not responsible for outcomes.

Humans must still:

  • define what words mean (“cost,” “revenue,” “allocation”)
  • set policy boundaries
  • decide what is reasonable
  • validate results
  • interpret implications
  • make decisions

The human role becomes:

  • director
  • creator
  • editor
  • judge
  • translator

AI does not replace judgment.
It amplifies disciplined judgment.


6. Why This Matters Across the Organization

For Managers

  • Faster insight
  • Clearer explanations
  • Fewer “mystery numbers”
  • Greater confidence in decisions

For Finance Professionals

  • Less time fighting tools
  • More time on policy, tradeoffs, and risk
  • Stronger documentation and audit readiness

For IT Professionals

  • Cleaner specifications
  • Fewer misunderstandings
  • Better separation of logic and presentation
  • More maintainable systems

This is not a turf shift.
It is a clarity shift.


7. The Real Skill Shift

The modern analyst does not need to:

  • memorize every function
  • master every syntax rule
  • become a full-time programmer

The modern analyst must:

  • ask clear questions
  • supply authoritative context
  • define constraints
  • validate outputs
  • communicate meaning

AI handles the rest.


Conclusion: Intelligence, Directed

Excel, SQL Server, and Power BI remain the backbone of serious analysis—not because they are trendy, but because they mirror how thinking, systems, and decisions actually work.

AI changes how we use them:

  • it reads the manuals
  • writes the code
  • documents the logic
  • fixes the errors
  • explains the results

Humans provide direction.
AI provides execution.

Those who learn to work this way will not just be more efficient—they will be more credible, more influential, and more future-proof.


Appendix A

A Practical AI Prompt Library for Finance, Government, and Analytical Professionals

This appendix is meant to be used, not admired.

These prompts reflect how professionals actually work: with rules, constraints, audits, deadlines, and political consequences.

You are not asking AI to “be smart.”
You are directing intelligence.


A.1 Foundational “Read & Confirm” Prompts (Critical)

Use these first. Always.

Prompt

“Read the attached document in full. Treat it as authoritative. Summarize the structure, rules, definitions, exceptions, and dependencies. Do not add assumptions. I will confirm your understanding.”

Why this matters

  • Eliminates guessing
  • Aligns AI with your institutional reality
  • Prevents hallucinated rules

A.2 Excel Modeling Prompts

Scenario Model

“Design an Excel workbook with Inputs, Calculations, and Outputs tabs. Use named ranges. Include scenario toggles and validation checks that confirm totals tie out.”

Formula Debugging

“This Excel formula returns an error. Explain why, fix it, and rewrite it in a clearer form.”

Macro Creation

“Write a VBA macro that refreshes all data connections, recalculates, logs a timestamp, and alerts the user if validation checks fail. Comment every section.”

Documentation

“Explain this Excel workbook as if onboarding a new analyst. Describe what each worksheet does and how inputs flow to outputs.”


A.3 SQL Server Prompts

View Creation

“Create a SQL view that produces monthly totals by City and Department. Grain must be City-Month-Department. Exclude void transactions. Add comments and validation queries.”
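
A sketch of what this prompt should yield, assuming hypothetical table and column names (dbo.Transactions with City, Department, TxnDate, Amount, and an IsVoid flag):

-- Grain: one row per City-Month-Department. Void transactions excluded.
CREATE VIEW dbo.vw_MonthlyTotalsByCityDept AS
SELECT
    City,
    Department,
    DATEFROMPARTS(YEAR(TxnDate), MONTH(TxnDate), 1) AS MonthStart,
    SUM(Amount) AS TotalAmount
FROM dbo.Transactions
WHERE IsVoid = 0
GROUP BY City, Department, DATEFROMPARTS(YEAR(TxnDate), MONTH(TxnDate), 1);

-- Validation: the view must tie to the source total (expect zero).
SELECT (SELECT SUM(TotalAmount) FROM dbo.vw_MonthlyTotalsByCityDept)
     - (SELECT SUM(Amount) FROM dbo.Transactions WHERE IsVoid = 0) AS Difference;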

Performance Refactor

“Refactor this SQL query for performance without changing results. Explain what you changed and why.”

Error Interpretation

“Here is a SQL Server error message. Explain it in plain English and fix the query.”

Documentation

“Document this SQL schema so a new analyst understands table purpose, keys, and relationships.”


A.4 Power BI / DAX Prompts

(DAX = Data Analysis Expressions, the calculation language used by Power BI — a language AI already understands deeply.)

Measure Creation

“Create DAX measures for Total Cost, Cost per Capita, Year-over-Year Change, and Rolling 12-Month Average. Explain filter context for each.”

Debugging

“This DAX measure returns incorrect results when filtered. Explain why and correct it.”

Model Review

“Review this Power BI data model and identify risks: ambiguous relationships, missing dimensions, or inconsistent grain.”


A.5 Validation & Audit Prompts

Validation Suite

“Create validation queries that confirm totals tie to source systems and flag variances greater than 0.1%.”
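
A minimal sketch of such a check, assuming hypothetical model and source summary tables keyed by month:

-- Flag any month where the model drifts more than 0.1% from the source system.
SELECT
    m.MonthStart,
    m.TotalAmount AS ModelTotal,
    s.TotalAmount AS SourceTotal,
    m.TotalAmount - s.TotalAmount AS Variance
FROM dbo.ModelMonthlyTotals AS m
JOIN dbo.SourceMonthlyTotals AS s
    ON s.MonthStart = m.MonthStart
WHERE ABS(m.TotalAmount - s.TotalAmount) > 0.001 * ABS(s.TotalAmount);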

Audit Explanation

“Explain how this model produces its final numbers in language suitable for auditors.”


A.6 Training & Handoff Prompts

Training Guide

“Create a training guide for an internal analyst explaining how to refresh, validate, and extend this model safely.”

Institutional Memory

“Write a ‘how this system thinks’ document explaining design philosophy, assumptions, and known limitations.”


Key Principle

Good prompts don’t ask for brilliance.
They provide clarity.


Appendix B

How to Validate AI-Generated Analysis Without Becoming Paranoid

AI does not eliminate validation.
It raises the bar for it.

The danger is not trusting AI too much.
The danger is trusting anything without discipline.


B.1 The Rule of Independent Confirmation

Every important number must:

  • tie to a known source, or
  • be independently recomputable

If it cannot be independently confirmed, it is not final.


B.2 Validation Layers (Use All of Them)

Layer 1 — Structural Validation

  • Correct grain (monthly vs annual)
  • No duplicate keys
  • Expected row counts

Layer 2 — Arithmetic Validation

  • Subtotals equal totals
  • Allocations sum to 100%
  • No unexplained residuals

Layer 3 — Reconciliation

  • Ties to GL, ACFR, payroll, ridership, etc.
  • Same totals across tools (Excel, SQL, Power BI)

Layer 4 — Reasonableness Tests

  • Per-capita values plausible?
  • Sudden jumps explainable?
  • Trends consistent with known events?

AI can help generate all four layers, but humans must decide what “reasonable” means.
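
Layer 1 checks in particular are cheap to automate. A sketch, assuming a hypothetical output table whose promised grain is City-Month-Department:

-- Duplicate-grain check: any row returned means the stated grain is broken.
SELECT City, Department, MonthStart, COUNT(*) AS DuplicateRows
FROM dbo.MonthlyTotals
GROUP BY City, Department, MonthStart
HAVING COUNT(*) > 1;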


B.3 The “Explain It Back” Test

One of the strongest validation techniques:

“Explain how this number was produced step by step.”

If the explanation:

  • is coherent
  • references known rules
  • matches expectations

You’re on solid ground.

If not, stop.


B.4 Change Detection

Always compare:

  • this month vs last month
  • current version vs prior version

Ask AI:

“Identify and explain every material change between these two outputs.”

This catches silent errors early.
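
A sketch of the underlying comparison, assuming two hypothetical snapshot tables sharing the same grain:

-- Surface every account whose amount changed materially between versions.
SELECT
    COALESCE(cur.AccountKey, pri.AccountKey) AS AccountKey,
    pri.Amount AS PriorAmount,
    cur.Amount AS CurrentAmount,
    COALESCE(cur.Amount, 0) - COALESCE(pri.Amount, 0) AS Change
FROM dbo.Snapshot_Current AS cur
FULL OUTER JOIN dbo.Snapshot_Prior AS pri
    ON pri.AccountKey = cur.AccountKey
WHERE ABS(COALESCE(cur.Amount, 0) - COALESCE(pri.Amount, 0)) > 1000;  -- materiality threshold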


B.5 What Validation Is Not

Validation is not:

  • blind trust
  • endless skepticism
  • redoing everything manually

Validation is structured confidence-building.


B.6 Why AI Helps Validation (Instead of Weakening It)

AI:

  • generates test queries quickly
  • explains failures clearly
  • documents expected behavior
  • flags anomalies humans may miss

AI doesn’t reduce rigor.
It makes rigor affordable.


Appendix C

What Managers Should Ask For — and What They Should Stop Asking For

This appendix is for leaders.

Good management questions produce good systems.
Bad questions produce busywork.


C.1 What Managers Should Ask For

“Show me the assumptions.”

If assumptions aren’t visible, the output isn’t trustworthy.


“How does this tie to official numbers?”

Every serious analysis must reconcile to something authoritative.


“What would change this conclusion?”

Good models reveal sensitivities, not just answers.


“How will this update next month?”

If refresh is manual or unclear, the model is fragile.


“Who can maintain this if you’re gone?”

This forces documentation and institutional ownership.


C.2 What Managers Should Stop Asking For

❌ “Just give me the number.”

Numbers without context are liabilities.


❌ “Can you do this quickly?”

Speed without clarity creates rework and mistrust.


❌ “Why can’t this be done in Excel?”

Excel is powerful—but it is not a system of record.


❌ “Can’t AI just do this automatically?”

AI accelerates work within rules.
It does not invent governance.


C.3 The Best Managerial Question of All

“How confident should I be in this, and why?”

That question invites:

  • validation
  • explanation
  • humility
  • trust

It turns analysis into leadership support instead of technical theater.


Appendix D

Job Description: The Modern Analyst (0–3 Years Experience)

This job description reflects what an effective, durable analyst looks like today — not a unicorn, not a senior architect, and not a narrow technician.

This role assumes the analyst will work in an environment that uses Excel, SQL Server, Power BI, and AI tools as part of normal operations.


Position Title

Data / Financial / Business Analyst
(Title may vary by organization)


Experience Level

  • Entry-level to 3 years of professional experience
  • Recent graduates encouraged to apply

Role Purpose

The Modern Analyst supports decision-making by:

  • transforming raw data into reliable information,
  • building repeatable analytical workflows,
  • documenting logic clearly,
  • and communicating results in ways leaders can trust.

This role is not about memorizing syntax or becoming a single-tool expert.
It is about directing analytical tools — including AI — with clarity, discipline, and judgment.


Core Responsibilities

1. Analytical Thinking & Problem Framing

  • Translate business questions into analytical tasks
  • Clarify assumptions, definitions, and scope before analysis begins
  • Identify what data is needed and where it comes from
  • Ask follow-up questions when requirements are ambiguous

2. Excel Modeling & Scenario Analysis

  • Build and maintain Excel models using:
    • structured layouts (inputs → calculations → outputs)
    • clear formulas and named ranges
    • validation checks and reconciliation totals
  • Use Excel for:
    • exploratory analysis
    • scenario testing
    • sensitivity analysis
  • Leverage AI tools to:
    • generate formulas
    • debug errors
    • document models

3. SQL Server Data Work

  • Query and analyze data stored in SQL Server
  • Create and maintain:
    • views
    • aggregation queries
    • validation checks
  • Understand concepts such as:
    • joins
    • grouping
    • grain (row-level meaning)
  • Use AI assistance to:
    • write SQL code
    • optimize queries
    • interpret error messages
    • document logic clearly

(Deep database administration is not required.)


4. Power BI Reporting & Analysis

  • Build and maintain Power BI reports and dashboards
  • Use existing semantic models and measures
  • Create new measures using DAX (Data Analysis Expressions) with AI guidance
  • Ensure reports:
    • align with defined metrics
    • update reliably
    • are understandable to non-technical users

5. Documentation & Knowledge Transfer

  • Document:
    • Excel models
    • SQL queries
    • Power BI reports
  • Write explanations that allow another analyst to:
    • understand the logic
    • reproduce results
    • maintain the system
  • Use AI to accelerate documentation while ensuring accuracy

6. Validation & Quality Control

  • Reconcile outputs to authoritative sources
  • Identify anomalies and unexplained changes
  • Use validation checks rather than assumptions
  • Explain confidence levels and limitations clearly

7. Collaboration & Communication

  • Work with:
    • finance
    • operations
    • IT
    • management
  • Present findings clearly in plain language
  • Respond constructively to questions and challenges
  • Accept feedback and revise analysis as needed

Required Skills & Competencies

Analytical & Professional Skills

  • Curiosity and skepticism
  • Attention to detail
  • Comfort asking clarifying questions
  • Willingness to document work
  • Ability to explain complex ideas simply

Technical Skills (Baseline)

  • Excel (intermediate level or higher)
  • Basic SQL (SELECT, JOIN, GROUP BY)
  • Familiarity with Power BI or similar BI tools
  • Comfort using AI tools for coding, explanation, and documentation

Candidates are not expected to know everything on day one.


Preferred Qualifications

  • Degree in:
    • Finance
    • Accounting
    • Economics
    • Data Analytics
    • Information Systems
    • Engineering
    • Public Administration
  • Internship or project experience involving data analysis
  • Exposure to:
    • budgeting
    • forecasting
    • cost allocation
    • operational metrics

What Success Looks Like (First 12–18 Months)

A successful analyst in this role will be able to:

  • independently build and explain Excel models
  • write and validate SQL queries with AI assistance
  • maintain Power BI reports without breaking definitions
  • document their work clearly
  • flag issues early rather than hiding uncertainty
  • earn trust by being transparent and disciplined

What This Role Is Not

This role is not:

  • a pure programmer role
  • a dashboard-only role
  • a “press the button” reporting job
  • a role that values speed over accuracy

Why This Role Matters

Organizations increasingly fail not because they lack data, but because:

  • logic is undocumented
  • assumptions are hidden
  • systems are fragile
  • knowledge walks out the door

This role exists to prevent that.


Closing Note to Candidates

You do not need to be an expert in every tool.

You do need to:

  • think clearly,
  • communicate honestly,
  • learn continuously,
  • and use AI responsibly.

If you can do that, the tools will follow.


Appendix E

Interview Questions a Strong Analyst Should Ask

(And Why the Answers Matter)

This appendix is written for candidates — especially early-career analysts — who want to succeed, grow, and contribute meaningfully.

These are not technical questions.
They are questions about whether the environment supports good analytical work.

A thoughtful organization will welcome these questions.
An uncomfortable response is itself an answer.


1. Will I Have Timely Access to the Data I’m Expected to Analyze?

Why this matters

Analysts fail more often from lack of access than lack of ability.

If key datasets (such as utility billing, payroll, permitting, or ridership data) require long approval chains, partial access, or repeated manual requests, analysis stalls. Long delays force analysts to restart work cold, which is inefficient and demoralizing.

A healthy environment has:

  • clear data access rules,
  • predictable turnaround times,
  • and documented data sources.

2. Will I Be Able to Work in Focused Blocks of Time?

Why this matters

Analytical work requires concentration and continuity.

If an analyst’s day is fragmented by:

  • constant meetings,
  • urgent ad-hoc requests,
  • unrelated administrative tasks,

then even talented analysts struggle to make progress. Repeated interruptions over days or weeks force constant re-learning and increase error risk.

Strong teams protect at least some uninterrupted time for deep work.


3. How Often Are Priorities Changed Once Work Has Started?

Why this matters

Changing priorities is normal. Constant resets are not.

Frequent shifts without closure:

  • waste effort,
  • erode confidence,
  • and prevent analysts from seeing work through to completion.

A good environment allows:

  • exploratory work,
  • followed by stabilization,
  • followed by delivery.

Analysts grow fastest when they can complete full analytical cycles.


4. Will I Be Asked to Do Significant Work Outside the Role You’re Hiring Me For?

Why this matters

Early-career analysts often fail because they are overloaded with tasks unrelated to analysis:

  • ad-hoc administrative work,
  • manual data entry,
  • report formatting unrelated to insights,
  • acting as an informal IT support desk.

This dilutes skill development and leads to frustration.

A strong role respects analytical focus while allowing reasonable cross-functional exposure.


5. Where Will This Role Sit Organizationally?

Why this matters

Analysts thrive when they are close to:

  • decision-makers,
  • subject-matter experts,
  • and the business context.

Being housed in IT can be appropriate in some organizations, but analysts often succeed best when:

  • they are embedded in finance, operations, or planning,
  • with strong, cooperative support from IT, not ownership by IT.

Clear role placement reduces confusion about expectations and priorities.


6. What Kind of Support Will I Have from IT?

Why this matters

Analysts do not need IT to do their work for them — but they do need:

  • help with access,
  • guidance on standards,
  • and assistance when systems issues arise.

A healthy environment has:

  • defined IT support pathways,
  • mutual respect between analysts and IT,
  • and shared goals around data quality and security.

Adversarial or unclear relationships slow everyone down.


7. Will I Be Encouraged to Document My Work — and Given Time to Do So?

Why this matters

Documentation is often praised but rarely protected.

If analysts are rewarded only for speed and output, documentation becomes the first casualty. This creates fragile systems and makes handoffs painful.

Strong organizations:

  • value documentation,
  • allow time for it,
  • and recognize it as part of the job, not overhead.

8. How Will Success Be Measured in the First Year?

Why this matters

Vague success criteria create anxiety and misalignment.

A healthy answer includes:

  • skill development,
  • reliability,
  • learning the organization’s data,
  • and increasing independence over time.

Early-career analysts need space to learn without fear of being labeled “slow.”


9. What Happens When Data or Assumptions Are Unclear?

Why this matters

No dataset is perfect.

Analysts succeed when:

  • questions are welcomed,
  • assumptions are discussed openly,
  • and uncertainty is handled professionally.

An environment that discourages questions or punishes transparency leads to quiet errors and loss of trust.


10. Will I Be Allowed — and Encouraged — to Use Modern Tools Responsibly?

Why this matters

Analysts today learn and work using tools like:

  • Excel,
  • SQL,
  • Power BI,
  • and AI-assisted analysis.

If these tools are discouraged, restricted without explanation, or treated with suspicion, analysts are forced into inefficient workflows. In many cases, newer versions with added features deliver real productivity gains, so it is fair to ask: Is the organization more than one or two years behind on updates? What are the views of key players about AI?

Strong organizations focus on:

  • governance,
  • validation,
  • and responsible use — not blanket prohibition.

11. How Are Analytical Mistakes Handled?

Why this matters

Mistakes happen — especially while learning.

The question is whether the culture responds with:

  • learning and correction, or
  • blame and fear.

Analysts grow fastest in environments where:

  • mistakes are surfaced early,
  • corrected openly,
  • and used to improve systems.

12. Who Will I Learn From?

Why this matters

Early-career analysts need:

  • examples,
  • feedback,
  • and mentorship.

Even informal guidance matters.

A thoughtful answer shows the organization understands that analysts are developed, not simply hired.


Closing Note to Candidates

These questions are not confrontational.
They are professional.

Organizations that welcome them are more likely to:

  • retain talent,
  • produce reliable analysis,
  • and build durable systems.

If an organization cannot answer these questions clearly, it does not mean it is a bad place — but it may not yet be a good place for an analyst to thrive.


Appendix F

A Necessary Truce: IT Control, Analyst Access, and the Role of Sandboxes

One of the most common — and understandable — tensions in modern organizations sits at the boundary between IT and analytical staff.

It usually sounds like this:

“We can’t let anyone outside IT touch live databases.”

On this point, IT is absolutely right.

Production systems exist to:

  • run payroll,
  • bill customers,
  • issue checks,
  • post transactions,
  • and protect sensitive information.

They must be:

  • stable,
  • secure,
  • auditable,
  • and minimally disturbed.

No serious analyst disputes this.

But here is the equally important follow-up question — one that often goes unspoken:

If analysts cannot access live systems, do they have access to a safe, current analytical environment instead?


Production Is Not the Same Thing as Analysis

The core misunderstanding is not about permission.
It is about purpose.

  • Production systems are built to execute transactions correctly.
  • Analytical systems are built to understand what happened.

These are different jobs, and they should live in different places.

IT departments already understand this distinction in principle. The question is whether it has been implemented in practice.


The Case for Sandboxes and Analytical Mirrors

A well-run organization does not give analysts access to live transactional tables.

Instead, it provides:

  • read-only mirrors
  • overnight refreshes at a minimum
  • restricted, de-identified datasets
  • clearly defined analytical schemas

This is not radical.
It is standard practice in mature organizations.

What a Sandbox Actually Is

A sandbox is:

  • a copy of production data,
  • refreshed on a schedule (often nightly),
  • isolated from operational systems,
  • and safe to explore without risk.

Analysts can:

  • query freely,
  • build models,
  • validate logic,
  • and document findings

…without the possibility of disrupting operations.


A Practical Example: Payroll and Personnel Data

Payroll is often cited as the most sensitive system — and rightly so.

But here is the practical reality:

Most analytical work does not require:

  • Social Security numbers
  • bank account details
  • wage garnishments
  • benefit elections
  • direct deposit instructions

What analysts do need are things like:

  • position counts
  • departments
  • job classifications
  • pay grades
  • hours worked
  • overtime
  • trends over time

A Payroll / Personnel sandbox can be created that:

  • mirrors the real payroll tables,
  • strips or masks protected fields,
  • replaces SSNs with surrogate keys,
  • removes fields irrelevant to analysis,
  • refreshes nightly from production

This allows analysts to answer questions such as:

  • How is staffing changing?
  • Where is overtime increasing?
  • What are vacancy trends?
  • How do personnel costs vary by department or function?

All without exposing sensitive personal data.
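
A sketch of what such a mirror might look like, using hypothetical table and column names (real implementations often prefer a persisted surrogate-key mapping table over a hash):

-- Analytical mirror of payroll: protected fields are never selected.
CREATE VIEW sandbox.vw_PersonnelAnalytics AS
SELECT
    HASHBYTES('SHA2_256', CONVERT(varbinary(64), SSN)) AS EmployeeKey,  -- surrogate key; raw SSN never exposed
    Department,
    JobClassification,
    PayGrade,
    HoursWorked,
    OvertimeHours,
    PayPeriodEnd
FROM payroll.EmployeePay;
-- Bank accounts, garnishments, and benefit elections are excluded by omission.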

This is not a compromise of security.
It is an application of data minimization, a core security principle.


Why This Matters More Than IT Realizes

When analysts lack access to safe, current analytical data, several predictable failures occur:

  • Analysts rely on stale exports
  • Logic is rebuilt repeatedly from scratch
  • Results drift from official numbers
  • Trust erodes between departments
  • Decision-makers get inconsistent answers

Ironically, over-restriction often increases risk, because:

  • people copy data locally,
  • spreadsheets proliferate,
  • and controls disappear entirely.

A well-designed sandbox reduces risk by centralizing access under governance.


What IT Is Right to Insist On

IT is correct to insist on:

  • no write access
  • no direct production access
  • strong role-based security
  • auditing and logging
  • clear ownership of schemas
  • documented refresh processes

None of that is negotiable.

But those safeguards are fully compatible with analyst access — if access is provided in the right environment.


What Analysts Are Reasonably Asking For

Analysts are not asking to:

  • run UPDATE statements on live tables
  • bypass security controls
  • access protected personal data
  • manage infrastructure

They are asking for:

  • timely access to analytical copies of data
  • predictable refresh schedules
  • stable schemas
  • and the ability to do their job without constant resets

That is a governance problem, not a personnel problem.


The Ideal Operating Model

In a healthy organization:

  • IT owns production systems
  • IT builds and governs analytical mirrors
  • Analysts work in sandboxes
  • Finance and operations define meaning
  • Validation ties analysis back to production totals
  • Everyone wins

This model:

  • protects systems,
  • protects data,
  • supports analysis,
  • and builds trust.

Why This Belongs in This Series

Earlier appendices described:

  • the skills of the modern analyst,
  • the questions analysts should ask,
  • and the environments that cause analysts to fail or succeed.

This appendix addresses a core environmental reality:

Analysts cannot succeed without access — and access does not require risk.

The solution is not fewer analysts or tighter gates.
The solution is better separation between production and analysis.


A Final Word to IT, Finance, and Leadership

This is not an argument against IT control.

It is an argument for IT leadership.

The most effective IT departments are not those that say “no” most often —
they are the ones that say:

“Here is the safe way to do this.”

Sandboxes, data warehouses, and analytical mirrors are not luxuries.
They are the infrastructure that allows modern organizations to think clearly without breaking what already works.

Closing Note on the Appendices

These appendices complete the framework:

  • The main essay explains the stack
  • The follow-up explains how to direct AI
  • These appendices make it operational

Together, they describe not just how to use AI—but how to use it responsibly, professionally, and durably.

Population as the Primary and Predictable Driver of Local Government Forecasting

A collaboration between Lewis McLain & AI

A technical framework for staffing, facilities, and cost projection

Abstract

In local government forecasting, population is the dominant driver of service demand, staffing requirements, facility needs, and operating costs. While no municipal system can be forecast with perfect precision, population-based models—when properly structured—produce estimates that are sufficiently accurate for planning, budgeting, and capital decision-making. Crucially, population growth in cities is not a sudden or unknowable event.

Through annexation, zoning, platting, infrastructure construction, utility connections, and certificates of occupancy, population arrival is observable months or years in advance. This paper presents population not merely as a driver, but as a leading indicator, and demonstrates how cities can convert development approvals into staged population forecasts that support rational staffing, facility sizing, capital investment, and operating cost projections.


1. Introduction: Why population sits at the center

Local governments exist to provide services to people. Police protection, fire response, streets, parks, water, sanitation, administration, and regulatory oversight are all mechanisms for supporting a resident population and the activity it generates. While policy choices and service standards influence how services are delivered, the volume of demand originates with population.

Practitioners often summarize this reality informally:

“Tell me the population, and I can tell you roughly how many police officers you need.
If I know the staff, I can estimate the size of the building.
If I know the size, I can estimate the construction cost.
If I know the size, I can estimate the electricity bill.”

This paper formalizes that intuition into a defensible forecasting framework and addresses a critical objection: population is often treated as uncertain or unknowable. In practice, population growth in cities is neither sudden nor mysterious—it is permitted into existence through public processes that unfold over years.


2. Population as a base driver, not a single-variable shortcut

Population does not explain every budget line, but it explains most recurring demand when paired with a small number of modifiers.

At its core, many municipal services follow this structure:

Total Demand = α + β ⋅ Population

Where:

  • α (fixed minimum) represents baseline capacity required regardless of size (minimum staffing, governance, 24/7 coverage).
  • β (variable component) represents incremental demand generated by each additional resident.

This structure explains why:

  • Small cities appear “overstaffed” per capita (fixed minimum dominates).
  • Mid-sized and large cities stabilize into predictable staffing ratios.
  • Growth pressures emerge when population increases faster than capacity adjustments.
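
A hypothetical illustration: with α = 6 FTE of fixed baseline capacity and β = 0.8 FTE per 1,000 residents, a city of 10,000 needs 6 + 0.8 × 10 = 14 FTE (1.4 per 1,000 residents), while a city of 40,000 needs 6 + 0.8 × 40 = 38 FTE (0.95 per 1,000). The smaller city is not overstaffed; the fixed minimum simply dominates.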

Population therefore functions as the load variable of local government, analogous to demand in utility planning.


3. Why population reliably predicts service demand

3.1 People generate transactions

Residents generate:

  • Calls for service
  • Utility usage
  • Permits and inspections
  • Court activity
  • Recreation participation
  • Library circulation
  • Administrative transactions (HR, payroll, finance, IT)

While individual events vary, aggregate demand scales with population.

3.2 Capacity, not consumption, drives budgets

Municipal budgets fund capacity, not just usage:

  • Staff must be available before calls occur
  • Facilities must exist before staff are hired
  • Vehicles and equipment must be in place before service delivery

Capacity decisions are inherently population-driven.


4. Population growth is observable before it arrives

A defining feature of local government forecasting—often underappreciated—is that population growth is authorized through public approvals long before residents appear in census or utility data.

Population does not “arrive”; it progresses through a pipeline.


5. The development pipeline as a population forecasting timeline

5.1 Annexation: strategic intent (years out)

Annexation establishes:

  • Jurisdictional responsibility
  • Long-term service obligations
  • Future land-use authority

While annexation does not create immediate population, it signals where population will eventually be allowed.

Forecast role:

  • Long-range horizon marker
  • Infrastructure and service envelope planning
  • Typical lead time: 3–10 years

5.2 Zoning: maximum theoretical population

Zoning converts land into entitled density.

From zoning alone, cities can estimate:

  • Maximum dwelling units
  • Maximum population at buildout
  • Long-run service ceilings

Zoning defines upper bounds, even if timing is uncertain.
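
A hypothetical illustration: 160 acres zoned at 4 dwelling units per acre, at an assumed 2.7 persons per household, implies a buildout ceiling of roughly 160 × 4 × 2.7 ≈ 1,730 residents. That ceiling is a bound, not a schedule.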

Forecast role:

  • Long-range capacity planning
  • Useful for master plans and utility sizing
  • Typical lead time: 3–7 years

5.3 Preliminary plat: credible development intent

Preliminary plat approval signals:

  • Developer capital commitment
  • Defined lot counts
  • Identified phasing

Population estimates become quantifiable, even if delivery timing varies.

Forecast role:

  • Medium-high certainty population
  • First stage for phased population modeling
  • Typical lead time: 1–3 years

5.4 Final plat: scheduled population

Final plat approval:

  • Legally creates lots
  • Locks in density and configuration
  • Triggers infrastructure construction
  • Commits impact fees and other costs

At this point, population arrival is no longer speculative.

Forecast role:

  • High-confidence population forecasting
  • Suitable for annual budget and staffing models
  • Typical lead time: 6–24 months

5.5 Infrastructure construction: timing constraints

Once streets, utilities, and drainage are built, population arrival becomes physically constrained by construction schedules.

Forecast role:

  • Narrow timing window
  • Supports staffing lead-time decisions
  • Typical lead time: 6–18 months

5.6 Water meter connections: imminent occupancy

Water meters are one of the most reliable near-term indicators:

  • Each residential meter ≈ one household
  • Installations closely precede vertical construction

Forecast role:

  • Quarterly or monthly population forecasting
  • Just-in-time operational scaling
  • Typical lead time: 1–6 months

5.7 Certificates of Occupancy: population realized

Certificates of occupancy convert permitted population into actual population.

At this point:

  • Service demand begins immediately
  • Utility consumption appears
  • Forecasts can be validated

Forecast role:

  • Confirmation and calibration
  • Not prediction

6. Population forecasting as a confidence ladder

Development Stage    | Population Certainty | Timing Precision | Planning Use
Annexation           | Low                  | Very low         | Strategic
Zoning               | Low–Medium           | Low              | Capacity envelopes
Preliminary Plat     | Medium               | Medium           | Phased planning
Final Plat           | High                 | Medium–High      | Budget & staffing
Infrastructure Built | Very High            | High             | Operational prep
Water Meters         | Extremely High       | Very High        | Near-term ops
COs                  | Certain              | Exact            | Validation

Population forecasting in cities is therefore graduated, not binary.


7. From population to staffing

Once population arrival is staged, staffing can be forecast using service-specific ratios and fixed minimums.

7.1 Police example (illustrative ranges)

Sworn officers per 1,000 residents commonly stabilize within broad bands that depend on service level, demand, and known local ratios:

  • Lower demand: ~1.2–1.8
  • Moderate demand: ~1.8–2.4
  • High demand: ~2.4–3.5+

Civilian support staff often scale as a fraction of sworn staffing.

The appropriate structure is:

Officers = α_police + β_police ⋅ Population

Where α accounts for minimum 24/7 coverage and supervision.
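
A hypothetical illustration: with α_police = 12 officers for minimum 24/7 coverage and supervision and β_police = 1.8 officers per 1,000 residents, a city growing from 30,000 to 38,000 residents moves from 12 + 1.8 × 30 = 66 officers to roughly 12 + 1.8 × 38 ≈ 80. Those 14 additional officers are a recruiting and training pipeline that must start before the residents arrive.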


7.2 General government staffing

Administrative staffing scales with:

  • Population
  • Number of employees
  • Asset inventory
  • Transaction volume

A fixed core plus incremental per-capita growth captures this reality more accurately than pure ratios.


8. From staffing to facilities

Facilities are a function of:

  • Headcount
  • Service configuration
  • Security and public access needs

A practical planning method:

Facility Size = FTE ⋅ Gross SF per FTE

Typical blended civic office planning ranges usually fall within:

  • ~175–300 gross SF per employee

Specialized spaces (dispatch, evidence, fleet, courts) are layered on separately.
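
Continuing the hypothetical police example: 80 sworn officers plus 25 civilian staff at a blended 225 gross SF per FTE implies roughly 105 × 225 ≈ 23,600 SF of general space, before specialized areas are added.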


9. From facilities to capital and operating costs

9.1 Capital costs

Capital expansion costs are typically modeled as:

Capex = Added SF ⋅ Cost per SF ⋅ (1 + Soft Costs)

Where soft costs include design, permitting, contingencies, and escalation.
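
Continuing the hypothetical: 23,600 added SF at an assumed $450 per SF with 30% soft costs implies roughly 23,600 × 450 × 1.3 ≈ $13.8 million. Unit costs vary widely by region and year; it is the structure of the estimate that carries.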


9.2 Operating costs

Facility operating costs scale predictably with size:

  • Electricity: kWh per SF per year
  • Maintenance: % of replacement value or $/SF
  • Custodial: $/SF
  • Lifecycle renewals

Electricity alone can be reasonably estimated as:

Annual Cost = SF ⋅ kWh/SF ⋅ $/kWh

This is rarely exact—but it is directionally reliable.
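
Continuing the hypothetical 23,600 SF facility: at an assumed 15 kWh per SF per year and $0.11 per kWh, annual electricity runs roughly 23,600 × 15 × 0.11 ≈ $39,000.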


10. Key modifiers that refine population models

Population alone is powerful but incomplete. High-quality forecasts adjust for:

  • Density and land use
  • Daytime population and employment
  • Demographics
  • Service standards
  • Productivity and technology
  • Geographic scale (lane miles, acres)

These modifiers refine, but do not replace, population as the base driver.


11. Why growth surprises cities anyway

When cities claim growth was “unexpected,” the issue is rarely lack of information. More often:

  • Development signals were not integrated into finance models
  • Staffing and capital planning lagged approvals
  • Fixed minimums were ignored
  • Threshold effects (new stations, expansions) were deferred too long

Growth that appears sudden is usually forecastable growth that was not operationalized.


12. Conclusion

Population is the primary driver of local government demand, but more importantly, it is a predictable driver. Through annexation, zoning, platting, infrastructure construction, utility connections, and certificates of occupancy, cities possess a multi-year advance view of population arrival.

This makes it possible to:

  • Phase staffing rationally
  • Time facilities before overload
  • Align capital investment with demand
  • Improve credibility with councils, auditors, and rating agencies

In local government, population growth is not a surprise. It is a permitted, engineered, and scheduled outcome of public decisions. A forecasting system that treats population as both a driver and a leading indicator is not speculative—it is simply paying attention to the city’s own approvals.


Appendix A

Defensibility of Population-Driven Forecasting Models

A response framework for auditors, rating agencies, and governing bodies

Purpose of this appendix

This appendix addresses a common concern raised during budget reviews, audits, bond disclosures, and council deliberations:

“Population-based forecasts seem too simplistic or speculative.”

The purpose here is not to argue that population is the only factor affecting local government costs, but to demonstrate that population-driven forecasting—when anchored to development approvals and adjusted for service standards—is methodologically sound, observable, and conservative.


A.1 Population forecasting is not speculative in local government

A frequent misconception is that population forecasts rely on demographic projections or external estimates. In practice, this model relies primarily on the city’s own legally binding approvals.

Population growth enters the forecast only after it has passed through:

  • Annexation agreements
  • Zoning entitlements
  • Preliminary and final plats
  • Infrastructure construction
  • Utility connections
  • Certificates of occupancy

These are public, documented actions, not assumptions.

Key distinction for reviewers:
This model does not ask “How fast might the city grow?”
It asks “What growth has the city already approved, and when will it become occupied?”


A.2 Population is treated as a leading indicator, not a lagging one

Traditional population measures (census counts, ACS estimates) are lagging indicators. This model explicitly avoids relying on those for near-term forecasting.

Instead, it uses development milestones as leading indicators, each with increasing certainty and narrower timing windows.

For audit and disclosure purposes:

  • Early-stage entitlements affect only long-range capacity planning
  • Staffing and capital decisions are triggered only at later, high-certainty stages
  • Near-term operating impacts are tied to utility connections and COs

This layered approach prevents premature spending while avoiding reactive under-staffing.


A.3 Fixed minimums prevent over-projection in small or slow-growth cities

A common audit concern is that per-capita models overstate staffing needs.

This model explicitly separates:

  • Fixed baseline capacity (α)
  • Incremental population-driven capacity (β)

This structure:

  • Prevents unrealistic staffing increases in early growth stages
  • Accurately reflects real-world minimum staffing requirements
  • Explains why per-capita ratios vary by city size

Auditors should note that this approach is more conservative than straight-line per-capita extrapolation.


A.4 Service standards are explicit policy inputs, not hidden assumptions

Population does not automatically dictate staffing levels. Staffing reflects policy decisions.

This model requires the city to explicitly state:

  • Response time targets
  • Service frequency goals
  • Coverage expectations
  • Hours of operation

As a result:

  • Changes in staffing can be clearly attributed to either population growth or policy change
  • Council decisions are transparently reflected in forecasts
  • The model separates “growth pressure” from “service enhancements or reductions”

This clarity improves accountability rather than obscuring it.


A.5 Facilities and capital projections follow staffing, not speculation

Another concern raised by reviewers is that population forecasts may be used to justify premature capital expansion.

This model deliberately enforces a sequencing discipline:

  1. Population approvals observed
  2. Staffing thresholds reached
  3. Facility capacity constraints identified
  4. Capital expansion triggered

Facilities are not expanded because population might grow, but because staffing—already justified by approved growth—can no longer be accommodated.

This mirrors best practices in asset management and avoids front-loading debt.


A.6 Operating cost estimates use industry-standard unit costs

Electricity, maintenance, custodial, and lifecycle costs are estimated using:

  • Per-square-foot benchmarks
  • Historical city utility data where available
  • Conservative unit assumptions

These are not novel or experimental methods. They are the same unit-cost techniques commonly used in:

  • CIP planning
  • Facility condition assessments
  • Energy benchmarking
  • Budget impact statements

Auditors should view these estimates as planning magnitudes, not precise bills—and that distinction is explicitly stated in the model documentation.


A.7 The model is testable and falsifiable

A major strength of this approach is that it can be validated against actual outcomes.

As certificates of occupancy are issued:

  • Actual population arrival can be compared to forecasts
  • Staffing changes can be reconciled
  • Utility consumption can be measured

This allows:

  • Annual recalibration
  • Error tracking
  • Continuous improvement

Models that can be tested and corrected are inherently more defensible than opaque judgment-based forecasts.


A.8 Why this approach aligns with rating-agency expectations

Bond rating agencies consistently emphasize:

  • Predictability
  • Governance discipline
  • Forward planning
  • Avoidance of reactive financial decisions

This framework demonstrates:

  • Awareness of growth pressures well in advance
  • Phased responses rather than abrupt spending
  • Clear linkage between approvals, staffing, and capital
  • Conservative treatment of uncertainty

As such, population-driven forecasting anchored to development approvals should be viewed as a credit positive, not a risk.


A.9 Summary for reviewers

For audit, disclosure, and governance purposes, the following conclusions are reasonable:

  1. Population growth in cities is observable years in advance through public approvals.
  2. Using approved development as a population driver is evidence-based, not speculative.
  3. Fixed minimums and service-level inputs prevent mechanical over-projection.
  4. Staffing precedes facilities; facilities precede capital.
  5. Operating costs scale predictably with assets and space.
  6. The model is transparent, testable, and adjustable.

Therefore:
A population-driven forecasting model of this type represents a prudent, defensible, and professionally reasonable approach to long-range municipal planning.


Appendix B

Consequences of Failing to Anticipate Population Growth

A diagnostic review of reactive municipal planning

Purpose of this appendix

This appendix describes common failure patterns observed in cities that do not systematically link development approvals to population, staffing, and facility planning. These outcomes are not the result of negligence or bad intent; they typically arise from fragmented information, short planning horizons, or the absence of an integrated forecasting framework.

The patterns described below are widely recognized in municipal practice and are offered to illustrate the practical risks of reactive planning.


B.1 “Surprise growth” that was not actually a surprise

A frequent narrative in reactive cities is that growth “arrived suddenly.” In most cases, the growth was visible years earlier through zoning approvals, plats, or utility extensions but was not translated into staffing or capital plans.

Common indicators:

  • Approved subdivisions not reflected in operating forecasts
  • Development tracked only by planning staff, not finance or operations
  • Population discussed only after occupancy

Consequences:

  • Budget shocks
  • Emergency staffing requests
  • Loss of credibility with governing bodies

B.2 Knee-jerk staffing reactions

When growth impacts become unavoidable, reactive cities often respond through hurried staffing actions.

Typical symptoms:

  • Mid-year supplemental staffing requests
  • Heavy reliance on overtime
  • Accelerated hiring without workforce planning
  • Training pipelines overwhelmed

Consequences:

  • Elevated labor costs
  • Increased burnout and turnover
  • Declining service quality during growth periods
  • Inefficient long-term staffing structures

B.3 Under-sizing followed by over-correction

Without forward planning, cities often alternate between two extremes:

  1. Under-sizing due to conservative or delayed response
  2. Over-sizing in reaction to service breakdowns

Examples:

  • Facilities built too small “to be safe”
  • Rapid expansions shortly after completion
  • Swing from staffing shortages to excess capacity

Consequences:

  • Higher lifecycle costs
  • Poor space utilization
  • Perception of waste or mismanagement

B.4 Obsolete facilities at the moment of completion

Facilities planned without reference to future population often open already constrained.

Common causes:

  • Planning based on current headcount only
  • Ignoring entitled but unoccupied development
  • Failure to include expansion capability

Consequences:

  • Expensive retrofits
  • Disrupted operations during expansion
  • Shortened facility useful life

This is one of the most costly errors because capital investments are long-lived and difficult to correct.


B.5 Deferred capital followed by crisis-driven spending

Reactive cities often delay capital investment until systems fail visibly.

Typical patterns:

  • Fire stations added only after response times degrade
  • Police facilities expanded only after overcrowding
  • Utilities upgraded only after service complaints

Consequences:

  • Emergency procurement
  • Higher construction costs
  • Increased debt stress
  • Lost opportunity for phased financing

B.6 Misalignment between departments

When population intelligence is not shared across departments:

  • Planning knows what is coming
  • Finance budgets based on current year
  • Operations discover impacts last

Consequences:

  • Conflicting narratives to council
  • Fragmented decision-making
  • Reduced trust between departments

Population-driven forecasting provides a common factual baseline.


B.7 Overreliance on lagging indicators

Reactive cities often rely heavily on:

  • Census updates
  • Utility consumption after occupancy
  • Service call increases

These indicators confirm growth after it has already strained capacity.

Consequences:

  • Persistent lag between demand and response
  • Structural understaffing
  • Continual “catch-up” budgeting

B.8 Political whiplash and credibility erosion

Unanticipated growth pressures often force councils into repeated difficult votes:

  • Emergency funding requests
  • Mid-year budget amendments
  • Rapid debt authorizations

Over time, this leads to:

  • Voter skepticism
  • Council fatigue
  • Reduced tolerance for legitimate future investments

Planning failures become governance failures.


B.9 Inefficient use of taxpayer dollars

Ironically, reactive planning often costs more, not less.

Cost drivers include:

  • Overtime premiums
  • Compressed construction schedules
  • Retrofit and rework costs
  • Higher borrowing costs due to rushed timing

Proactive planning spreads costs over time and reduces risk premiums.


B.10 Organizational stress and morale impacts

Staff experience growth pressures first.

Observed impacts:

  • Chronic overtime
  • Inadequate workspace
  • Equipment shortages
  • Frustration with leadership responsiveness

Over time, this contributes to:

  • Higher turnover
  • Loss of institutional knowledge
  • Reduced service consistency

B.11 Why these failures persist

These patterns are not caused by incompetence. They persist because:

  • Growth information is siloed
  • Forecasting is viewed as speculative
  • Political incentives favor short-term restraint
  • Capital planning horizons are too short

Absent a formal framework, cities default to reaction.


B.12 Summary for governing bodies

Cities that do not integrate development approvals into population-driven forecasting commonly experience:

  1. Perceived “surprise” growth
  2. Emergency staffing responses
  3. Repeated under- and over-sizing
  4. Facilities that age prematurely
  5. Higher long-term costs
  6. Organizational strain
  7. Reduced public confidence

None of these outcomes are inevitable. They are symptoms of not using information the city already has.


B.13 Closing observation

The contrast between proactive and reactive cities is not one of optimism versus pessimism. It is a difference between:

  • Anticipation versus reaction
  • Sequencing versus scrambling
  • Planning versus explaining after the fact

Population-driven forecasting does not eliminate uncertainty. It replaces surprise with preparation.


Appendix C

Population Readiness & Forecasting Discipline Checklist

A self-assessment for proactive versus reactive cities

Purpose:
This checklist allows a city to evaluate whether it is systematically anticipating population growth—or discovering it after impacts occur. It is designed for use by city management teams, finance directors, auditors, and governing bodies.

How to use:
For each item, mark:

  • ✅ Yes / In place
  • ⚠️ Partially / Informal
  • ❌ No / Not done

Patterns matter more than individual answers.


Section 1 — Visibility of Future Population

C-1 Do we maintain a consolidated list of annexed, zoned, and entitled land with estimated buildout population?

C-2 Are preliminary and final plats tracked in a format usable by finance and operations (not just planning)?

C-3 Do we estimate population by development phase, not just at full buildout?

C-4 Is there a documented method for converting lots or units into population (household size assumptions reviewed periodically)?

C-5 Do we distinguish between long-range potential growth and near-term probable growth?

Red flag:
Population is discussed primarily in narrative terms (“fast growth,” “slowing growth”) rather than quantified and staged.


Section 2 — Timing and Lead Indicators

C-6 Do we identify which development milestone triggers planning action (e.g., preliminary plat vs final plat)?

C-7 Are infrastructure completion schedules incorporated into population timing assumptions?

C-8 Are water meter installations or equivalent utility connections tracked and forecasted?

C-9 Do we use certificates of occupancy to validate and recalibrate population forecasts annually?

C-10 Is population forecasting treated as a rolling forecast, not a once-per-year estimate?

Red flag:
Population is updated only when census or ACS data is released.


Section 3 — Staffing Linkage

C-11 Does each major department have an identified population or workload driver?

C-12 Are fixed minimum staffing levels explicitly separated from growth-driven staffing?

C-13 Are staffing increases tied to forecasted population arrival, not service breakdowns?

C-14 Do hiring plans account for lead times (recruitment, academies, training)?

C-15 Can we explain recent staffing increases as either:

  • population growth, or
  • explicit policy/service-level changes?

Red flag:
Staffing requests frequently cite “we are behind” without reference to forecasted growth.


Section 4 — Facilities and Capital Planning

C-16 Are facility size requirements derived from staffing projections, not current headcount?

C-17 Do capital plans include expansion thresholds (e.g., headcount or service load triggers)?

C-18 Are new facilities designed with future expansion capability?

C-19 Are entitled-but-unoccupied developments considered when evaluating future facility adequacy?

C-20 Do we avoid building facilities that are at or near capacity on opening day?

Red flag:
Facilities require major expansion within a few years of completion.


Section 5 — Operating Cost Awareness

C-21 Are operating costs (utilities, maintenance, custodial) modeled as a function of facility size and assets?

C-22 Are utility cost impacts of expansion estimated before facilities are approved?

C-23 Do we understand how population growth affects indirect departments (HR, IT, finance)?

C-24 Are lifecycle replacement costs considered when adding capacity?

Red flag:
Operating cost increases appear as “unavoidable surprises” after facilities open.


Section 6 — Cross-Department Integration

C-25 Do planning, finance, and operations use the same population assumptions?

C-26 Is growth discussed in joint meetings, not only within planning?

C-27 Does finance receive regular updates on development pipeline status?

C-28 Are growth assumptions documented and shared, not implicit or informal?

Red flag:
Different departments give different growth narratives to council.


Section 7 — Governance and Transparency

C-29 Can we clearly explain to council why staffing or capital is needed before service failure occurs?

C-30 Are population-driven assumptions documented in budget books or CIP narratives?

C-31 Do we distinguish between:

  • growth-driven needs, and
  • discretionary service enhancements?

C-32 Can auditors or rating agencies trace growth-related decisions back to documented approvals?

Red flag:
Growth explanations rely on urgency rather than evidence.


Section 8 — Validation and Learning

C-33 Do we compare forecasted population arrival to actual COs annually?

C-34 Are forecasting errors analyzed and corrected rather than ignored?

C-35 Do we adjust household size, absorption rates, or timing assumptions over time?

Red flag:
Forecasts remain unchanged year after year despite clear deviations.


Scoring Interpretation (Optional)

  • Mostly ✅ → Proactive, anticipatory city
  • Mix of ✅ and ⚠️ → Partially planned, risk of reactive behavior
  • Many ❌ → Reactive city; growth will feel like a surprise

A city does not need perfect scores. The presence of structure, documentation, and sequencing is what matters.


Closing Note for Leadership

If a city can answer most of these questions affirmatively, it is not guessing about growth—it is managing it. If many answers are negative, the city is likely reacting to outcomes it had the power to anticipate.

Population growth does not cause planning problems.
Ignoring known growth signals does.


Appendix D

Population-Driven Planning Maturity Model

A framework for assessing and improving municipal forecasting discipline

Purpose of this appendix

This maturity model describes how cities evolve in their ability to anticipate population growth and translate it into staffing, facility, and financial planning. It recognizes that most cities are not “good” or “bad” planners; they are simply at different stages of organizational maturity.

Each level builds logically on the prior one. Advancement does not require perfection—only structure, integration, and discipline.


Level 1 — Reactive City

“We didn’t see this coming.”

Characteristics

  • Population discussed only after impacts are felt
  • Reliance on census or anecdotal indicators
  • Growth described qualitatively (“exploding,” “slowing”)
  • Staffing added only after service failure
  • Capital projects triggered by visible overcrowding
  • Frequent mid-year budget amendments

Typical behaviors

  • Emergency staffing requests
  • Heavy overtime usage
  • Facilities opened already constrained
  • Surprise operating cost increases

Organizational mindset

Growth is treated as external and unpredictable.

Risks

  • Highest long-term cost
  • Lowest credibility with councils and rating agencies
  • Chronic organizational stress

Level 2 — Aware but Unintegrated City

“Planning knows growth is coming, but others don’t act on it.”

Characteristics

  • Development pipeline tracked by planning
  • Finance and operations not fully engaged
  • Growth acknowledged but not quantified in budgets
  • Capital planning still reactive
  • Limited documentation of assumptions

Typical behaviors

  • Late staffing responses despite known development
  • Facilities planned using current headcount
  • Disconnect between planning reports and budget narratives

Organizational mindset

Growth is known, but not operationalized.

Risks

  • Continued surprises
  • Internal frustration
  • Mixed messages to council

Level 3 — Structured Forecasting City

“We model growth, but execution lags.”

Characteristics

  • Population forecasts tied to development approvals
  • Preliminary staffing models exist
  • Fixed minimums recognized
  • Capital needs identified in advance
  • Forecasts updated annually

Typical behaviors

  • Better budget explanations
  • Improved CIP alignment
  • Still some late responses due to execution gaps

Organizational mindset

Growth is forecastable, but timing discipline is still developing.

Strengths

  • Credible analysis
  • Reduced emergencies
  • Clearer governance conversations

Level 4 — Integrated Planning City

“Approvals, staffing, and capital move together.”

Characteristics

  • Development pipeline drives population timing
  • Staffing plans phased to population arrival
  • Facility sizing based on projected headcount
  • Operating costs modeled from assets
  • Cross-department coordination is routine

Typical behaviors

  • Hiring planned ahead of demand
  • Facilities open with expansion capacity
  • Capital timed to avoid crisis spending
  • Clear audit trail from approvals to costs

Organizational mindset

Growth is managed, not reacted to.

Benefits

  • Stable service delivery during growth
  • Higher workforce morale
  • Strong credibility with governing bodies

Level 5 — Adaptive, Data-Driven City

“We learn, recalibrate, and optimize continuously.”

Characteristics

  • Rolling population forecasts
  • Development milestones tracked in near-real time
  • Annual validation against COs and utility data
  • Forecast errors analyzed and corrected
  • Scenario modeling for alternative growth paths

Typical behaviors

  • Minimal surprises
  • High confidence in long-range plans
  • Early identification of inflection points
  • Proactive communication with councils and investors

Organizational mindset

Growth is a controllable system, not a threat.

Benefits

  • Lowest lifecycle cost
  • Highest service reliability
  • Institutional resilience

Summary Table

Level | Description         | Core Risk
1     | Reactive            | Crisis-driven decisions
2     | Aware, unintegrated | Late responses
3     | Structured          | Execution lag
4     | Integrated          | Few surprises
5     | Adaptive            | Minimal risk

Key Insight

Most cities are not failing—they are stuck between Levels 2 and 3. The largest gains come not from sophisticated analytics, but from integration and timing discipline.

Progression does not require:

  • Perfect forecasts
  • Advanced software
  • Large consulting engagements

It requires:

  • Using approvals the city already grants (see the sketch below)
  • Sharing population assumptions across departments
  • Sequencing decisions intentionally
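
To make that concrete, here is a minimal sketch of the translation from approvals to population timing. The project names, unit counts, absorption pace, and household size are invented for illustration; a real model would use local data.

    # Minimal sketch: turn approved residential units into a population
    # timeline every department can plan against. All inputs are illustrative.
    APPROVED_PROJECTS = [
        # (project, approved units, first occupancy year, units absorbed per year)
        ("Creekside Phase 1", 400, 2026, 150),
        ("Mill District",     900, 2027, 300),
    ]
    PERSONS_PER_HOUSEHOLD = 2.7   # assumption; use local census figures

    def population_added_by_year(projects, pph, horizon=(2026, 2031)):
        added = {}
        for _, units, start_year, pace in projects:
            remaining, year = units, start_year
            while remaining > 0:
                occupied = min(pace, remaining)
                added[year] = added.get(year, 0) + occupied * pph
                remaining -= occupied
                year += 1
        return {y: round(added.get(y, 0)) for y in range(*horizon)}

    print(population_added_by_year(APPROVED_PROJECTS, PERSONS_PER_HOUSEHOLD))

When finance, HR, and public works all plan against the same output, staffing and capital decisions can be sequenced rather than improvised.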

Closing Observation

Cities do not choose whether they grow. They choose whether growth feels like a surprise or a scheduled event.

This maturity model makes that choice visible.

For Those Weary of Yet One More Survey

A collaboration between Lewis McLain & AI (Suggested by Becky Brooks)

Here is a funny, light-hearted, non-offensive survey designed as if a city or organization created it: full of the usual bureaucratic absurdity, but tailored for someone who has just spent a couple of weeks in jail.

It is intentionally ridiculous — the kind of tone-deaf survey a city might send, trying to measure the “customer experience.”

POST-INCARCERATION CUSTOMER SATISFACTION SURVEY

Because your feedback helps us improve the parts of the experience we had no intention of improving.

Thank you for recently spending 10–45 days with us!

Your stay matters to us, and we’d love your thoughts.

Please take 3–90 minutes to complete this survey.

SECTION 1 — OVERALL EXPERIENCE

1. How satisfied were you with your recent incarceration?

    •    ☐ Very Satisfied

    •    ☐ Satisfied

    •    ☐ Neutral (emotionally or spiritually)

    •    ☐ Dissatisfied

    •    ☐ Very Dissatisfied

    •    ☐ I would like to speak to the manager of jail, please

2. Would you recommend our facility to friends or family?

    •    ☐ Yes, absolutely

    •    ☐ Only if they deserve it

    •    ☐ No, but I might recommend it to my ex

3. Did your stay meet your expectations?

    •    ☐ It exceeded them, shockingly

    •    ☐ It met them, sadly

    •    ☐ What expectations?

    •    ☐ I didn’t expect any of this

SECTION 2 — ACCOMMODATIONS

4. How would you rate the comfort of your sleeping arrangements?

    •    ☐ Five stars (would book again on Expedia)

    •    ☐ Three stars (I’ve slept on worse couches)

    •    ☐ One star (my back may sue you)

    •    ☐ Zero stars (please never ask this again)

5. How would you describe room service?

    •    ☐ Prompt and professional

    •    ☐ Present

    •    ☐ Sporadic

    •    ☐ I was unaware room service was an option

    •    ☐ Wait… was that what breakfast was supposed to be?

SECTION 3 — DINING EXPERIENCE

6. Rate the culinary artistry of our meals:

    •    ☐ Michelin-worthy

    •    ☐ Edible with effort

    •    ☐ Mysterious but survivable

    •    ☐ I have questions that science cannot answer

7. Did you enjoy the variety of menu options?

    •    ☐ Yes

    •    ☐ No

    •    ☐ I’m still not sure if Tuesday’s entrée was food

SECTION 4 — PROGRAMMING & ACTIVITIES

8. Which of the following activities did you participate in?

    •    ☐ Walking in circles

    •    ☐ Sitting

    •    ☐ Thinking about life

    •    ☐ Thinking about lunch

    •    ☐ Wondering why time moves slower in here

    •    ☐ Other (please describe your spiritual journey): ___________

9. Did your stay include any unexpected opportunities for personal growth?

    •    ☐ Learned patience

    •    ☐ Learned humility

    •    ☐ Learned the legal system very quickly

    •    ☐ Learned I never want to fill out this survey again

SECTION 5 — CUSTOMER SERVICE

10. How would you rate the friendliness of staff?

    •    ☐ Surprisingly pleasant

    •    ☐ Professionally indifferent

    •    ☐ “Move over there” was said with warmth

    •    ☐ I think they liked me

    •    ☐ I think they didn’t

11. Did staff answer your questions in a timely manner?

    •    ☐ Yes

    •    ☐ No

    •    ☐ I’m still waiting

    •    ☐ I learned not to ask questions

SECTION 6 — RELEASE PROCESS

12. How smooth was your release experience?

    •    ☐ Smooth

    •    ☐ Mostly smooth

    •    ☐ Bumpy

    •    ☐ Like trying to exit a maze blindfolded

13. Upon release, did you feel ready to re-enter society?

    •    ☐ Yes, I am reborn

    •    ☐ Somewhat

    •    ☐ Not at all

    •    ☐ Please define “ready”

SECTION 7 — FINAL COMMENTS

14. If you could change one thing about your stay, what would it be?

(Please choose only one):

    •    ☐ The walls

    •    ☐ The food

    •    ☐ The schedule

    •    ☐ The length of stay

    •    ☐ All of the above

    •    ☐ I decline to answer on advice of counsel

15. Additional feedback for management:

(Comments will be carefully reviewed by someone someday.)

Thank You!

Your answers will be used to improve future guest experiences.*

*Though absolutely no one can guarantee that.

The Mind of an Inventor: The Common Thread of Creation

A collaboration between Lewis McLain & AI



I. Introduction — The Spark That Changes the World

Every great invention begins not in a laboratory but in a restless mind that refuses to accept things as they are. The inventor lives in the thin air between wonder and frustration: the wonder of seeing what might be, and the frustration that it does not yet exist.

To invent is to cross the border between imagination and matter—between “why not?” and “now it works.” Across centuries, the world’s greatest inventors have built in different mediums—stone, steam, circuits, code—yet share the same mental wiring: curiosity that won’t rest, courage that won’t quit, and a faith that imagination can serve humanity.


II. The Inventive Mindset

The inventor’s mind is a paradox. It thrives on both chaos and order, fantasy and formula.

  • Curiosity is its compass—an ache to understand how things work and how they could work better.
  • Observation is its lens—seeing patterns others overlook.
  • Playfulness is its fuel—testing ideas without fear of failure.
  • Persistence is its backbone—enduring the thousand prototypes that don’t succeed.

Failure doesn’t frighten the inventor; indifference does. To stop asking “why” is a far greater tragedy than a circuit that burns or a model that breaks.


III. Ten Inventors, Ten Windows into the Mind of Creation

Leonardo da Vinci — Sketching the Sky Before It Existed

Leonardo filled his notebooks with wings, gears, and impossible dreams. He studied the curve of a bird’s feather as if decoding a sacred language.

“Once you have tasted flight,” runs a line long attributed to him, “you will forever walk the earth with your eyes turned skyward.”
He painted with one hand and designed with the other, proving that art and engineering are not rivals but reflections. His flying machines never left the ground, yet every modern aircraft carries a trace of his ink.


Benjamin Franklin — Harnessing Heaven for Humanity

Franklin saw storms not as terrors but as teachers. He tied a key to a kite and coaxed lightning to reveal its secret kinship with electricity.

“Electric fire,” he marveled, “is of the same kind with that which is in the clouds.”
The lightning rod followed—a humble spike that saved countless roofs. His bifocals, his stove, his civic inventions all arose from empathy: an elder’s eyes, a neighbor’s cold house, a printer’s smoky air. He turned curiosity into charity.


Eli Whitney — The Engineer Who Made Things Fit

Whitney watched field hands comb seeds from cotton and thought, There must be a better way. His wire-toothed drum and brush—the cotton gin—sped production a hundredfold.

“It was a small thing,” he is said to have remarked, “but small things change empires.”
The gin enriched the South and, tragically, deepened slavery. Seeking redemption through precision, Whitney built the first system of interchangeable parts, proving that uniformity could multiply freedom of production. He changed not just a crop but the logic of industry.


Thomas Edison — The Factory of Light

At Menlo Park, light spilled from the windows while others slept. Inside, hundreds of filaments burned and failed.

“I haven’t failed,” Edison smiled. “I’ve found ten thousand ways that won’t work.”
When carbonized bamboo finally glowed for 1,200 hours, he built an entire electric ecosystem—power plants, wiring, meters, sockets. His true invention was not the bulb but the process of systematic innovation itself.


Nikola Tesla — The Dream That Outran Its Century

Tesla lived amid lightning of his own making. To him, the universe pulsed with invisible currents waiting to be tamed.

“The moment I imagine a device,” he claimed, “I can make it run in my mind.”
His AC induction motor and polyphase system powered cities from Niagara Falls. His dream of wireless energy bankrupted him but electrified the future. In him, imagination was not daydreaming—it was blueprinting.


Marie Curie — The Glow of the Invisible

In a shed that smelled of acid and hope, Curie boiled tons of pitchblende until a speck of radium glowed.

“Nothing in life is to be feared,” she said, “it is only to be understood.”
Her discovery of radioactivity opened new worlds of medicine and physics. During World War I she outfitted trucks with mobile X-ray units, helping save thousands of wounded soldiers. Science for her was not ambition—it was service illuminated.


The Wright Brothers — Learning the Language of Air

In their Dayton workshop, the Wrights balanced on wings of wood and faith. They built a wind tunnel, measured lift with bicycle parts, and studied every gust as if air itself were a textbook.

“The bird doesn’t just rise,” Wilbur observed, “it balances.”
Their 1903 flight at Kitty Hawk lasted only seconds, yet the world’s horizon shifted forever. They proved that methodical curiosity could conquer gravity itself.


Albert Einstein — Thought as an Instrument

Einstein’s laboratory was his imagination. He pictured himself chasing a beam of light and realized time might bend to keep pace.

“Imagination,” he said, “is more important than knowledge.”
From that image grew relativity, which remade physics. Yet his most practical insight—the photoelectric effect—became the foundation of solar power. Einstein invented with ideas instead of tools, showing that creativity can re-engineer reality.


Steve Jobs — The Art of Simplicity

Jobs demanded elegance as fiercely as others demanded speed. He fused hardware and software into harmony.

“It just works,” he’d say, though it took a thousand revisions to reach that ease.
The Mac, the iPod, the iPhone—each was less a gadget than a philosophy: that design is love made visible. Jobs reinvented the personal device by stripping it down until only meaning remained.


Tim Berners-Lee — The Architect of the Digital Commons

In a corridor at CERN, Berners-Lee envisioned scientists everywhere linking their work with one simple syntax.

“I just wanted a way for people to share what they knew.”
He built HTTP, HTML, and the first web server—then released them freely. No patents, no gatekeepers. His generosity made the World Wide Web the shared library of humankind.


Together they form a single conversation across centuries. Leonardo sketched the dream of flight; the Wrights gave it wings. Franklin tamed electricity; Tesla made it sing; Edison wired it into homes. Curie revealed invisible forces; Einstein explained them. Jobs and Berners-Lee re-channeled that same human spark into light made of code. Each voice answers the one before it, echoing: The world can be improved, and I will try.


IV. The Invisible Thread — Purpose and Pattern

Behind every experiment lies a conviction: that the universe is intelligible and worth improving.
Their shared geometry is imagination → iteration → illumination.
They teach that invention is not chaos but a form of hope—faith that our designs, however imperfect, can serve life itself. The true legacy of invention is not a patent portfolio; it is a pattern of thinking that turns wonder into welfare.


V. Conclusion — Love, Made Useful

The mind of an inventor is not born whole. It is forged in curiosity, hammered by failure, and tempered by empathy. These ten lives remind us that progress is a moral act, rooted in patience and compassion.

To think like an inventor is to love the world enough to fix it—to build not merely for profit or prestige but for people yet unborn. Invention, at its purest, is love that learned to use its hands.


Appendix — Biographical Notes and Key Inventions

Leonardo da Vinci — Italian polymath; foresaw helicopters, tanks, and canal locks through meticulous study of anatomy and motion.
Key: flight sketches, helical air screw, gear systems.

Benjamin Franklin — Printer, scientist, diplomat; proved lightning’s electrical nature; invented lightning rod, bifocals, Franklin stove.
Key: electrical experiments, civic innovations.

Eli Whitney — American engineer; built the cotton gin and standardized interchangeable parts for firearms, shaping mass production.
Key: cotton gin, precision tooling.

Thomas Edison — Inventor-entrepreneur; created the practical light system, phonograph, and motion picture camera; pioneered industrial R&D.
Key: incandescent lamp, phonograph, Kinetoscope.

Nikola Tesla — Serbian-American engineer; developed AC motors, polyphase power, radio principles, and the Tesla coil.
Key: alternating-current system, wireless power concepts.

Marie Curie — Physicist-chemist; discovered radium and polonium; founded radiology; first double Nobel laureate.
Key: radioactivity research, mobile X-rays.

Orville & Wilbur Wright — American aviation pioneers; invented three-axis control; achieved the first sustained, controlled powered flight.
Key: controlled flight, wind-tunnel data.

Albert Einstein — Theoretical physicist; formulated relativity, explained photoelectric effect, father of modern physics.
Key: relativity, photoelectric effect.

Steve Jobs — Apple co-founder; integrated technology and design into consumer art; drove personal computing and mobile revolutions.
Key: Macintosh, iPod/iTunes, iPhone, iPad.

Tim Berners-Lee — British computer scientist; created the World Wide Web’s foundational architecture and kept it open.
Key: URL, HTTP, HTML, first web server/browser.


🎨 Painting Concept: “The Council of Inventors”

Setting:
A softly lit Renaissance-style hall that feels timeless — stone arches overhead, candlelight mingling with the faint glow of electricity. At the center, a great oak table curves like an infinity symbol, symbolizing endless human curiosity. Around it, the ten inventors gather in dialogue — not chronological, but thematic, their inventions subtly illuminating the room.


Foreground Figures

  • Leonardo da Vinci stands near the left, sketchbook open, gesturing midair with a quill as though explaining the curvature of wings. His gaze meets the Wright Brothers, who are bent over a small model glider resting on the table.
  • Benjamin Franklin leans in nearby, one hand on a metal key, the other holding a faintly glowing lightning rod that arcs softly — the light blending into the candle glow.
  • Across from him, Edison adjusts a glowing bulb, its light reflecting in Franklin’s spectacles. Behind him, Nikola Tesla gazes upward, a tiny arc of blue current jumping between his fingertips, illuminating the diagram behind them.

Middle Figures

  • Eli Whitney sits near the table’s midpoint, hands on precision tools and calipers, his musket parts laid out like a puzzle. The Wright Brothers’ propeller model rests beside his gear molds, symbolizing the bridge between ground and air.
  • Marie Curie stands slightly apart, her face serene but determined, holding a small vial that emits a gentle ethereal light — a faint halo of pale blue radiance, illuminating her lab notes.
  • Albert Einstein leans over her shoulder, pipe in hand, scribbling light equations on a parchment that glow faintly, as if chalked by photons.

Background Figures

  • Steve Jobs is seated farther right, dressed in his signature black turtleneck — timeless among them — explaining the first iPhone to Tim Berners-Lee, who nods thoughtfully while holding a glowing string of code shaped like a thread of light. Between them, a subtle digital aura rises — a lattice of glowing lines suggesting the web connecting every mind in the room.

Drones as a Core Municipal Utility: Policy, Training, and Future Directions for Texas Cities

A collaboration between Lewis McLain and AI



Executive Summary

Municipal drone programs have rapidly evolved from experimental projects to dependable service tools. Today, Texas cities are beginning to treat drones not as gadgets but as core municipal utilities—shared resources as essential as fleet management, radios, or GIS. Properly implemented, drones can provide faster response times, safer job conditions, and higher-quality data, all while saving taxpayer money.

This paper explains how cities can build and sustain a municipal drone program. It examines current and emerging use cases, outlines staffing impacts, surveys training options and costs in Texas, explores fleet models and procurement, and considers the legal, policy, and community dimensions that must be addressed. It concludes with recommendations, case studies of failures, and appendices on payload regulation and FAA sample exam questions.

Handled wisely, drones will make cities safer, smarter, and more responsive. Mishandled, they risk creating public backlash, wasting funds, or even eroding trust.



The Case for Treating Drones as a Utility

Cities that succeed with drones do so by thinking of them as utilities, not toys. A drone program should be centrally governed, jointly funded, and transparently managed. Just like a municipal fleet or IT department, a citywide drone service must be reliable, equitable across departments, compliant with law, interoperable with other systems, and transparent to the public.

This approach ensures that drones are available where needed, that policies are consistent across departments, and that costs are shared fairly. Most importantly, it signals to residents that the city treats drone use seriously, with strong safeguards and clear accountability.



Current and Growing Uses

Across Texas and the country, municipal drones already serve a wide range of functions.

Public Safety: Police and fire agencies use drones as “first responders,” launching them from stations or rooftops in response to 911 calls. They provide live video of car crashes, fires, or hazardous scenes, often arriving before officers. Firefighters use drones with thermal cameras to locate victims or track hotspots in burning buildings.

Infrastructure and Public Works: Drones inspect bridges, culverts, roofs, and water towers. Instead of sending workers onto scaffolds or into confined spaces, crews now fly drones that capture detailed photos and 3D models. Landfills are surveyed from the air, methane leaks identified, and storm damage mapped quickly after major events.

Transportation and Planning: Drones monitor traffic flow, study queue lengths, and document work zones. City planners use them to create up-to-date maps, support zoning decisions, and maintain digital twins of urban areas.

Environmental and Health: From checking stormwater outfalls to mapping tree canopies, drones help environmental staff monitor city assets. In some regions, drones are used to identify standing water and apply larvicides for mosquito control.

Emergency Management: After floods, hurricanes, or tornadoes, drones provide rapid situational awareness, helping cities prioritize response and document damage for FEMA claims.

As automation improves, “drone-in-a-box” systems—drones that launch on schedule or in response to sensors—will soon become common municipal tools.



Staffing Impacts

A common fear is that drones will replace jobs. In practice, they save lives and money while creating new roles.

Jobs Saved: By reducing risky tasks like climbing scaffolds or entering confined spaces, drones make existing jobs safer. They also reduce overtime by finishing inspections or surveys in hours instead of days.

Jobs Added: Cities now employ drone program coordinators, FAA Part 107-certified pilots, data analysts, and compliance officers. A medium-sized Texas city might add ten to twenty such roles over the next five years.

Jobs Shifted: Inspectors, police officers, and firefighters increasingly become “drone-enabled” workers, adding aerial operations to their responsibilities. Over time, 5–10% of municipal staff in critical departments may be retrained in drone use.

The net result is redistribution rather than reduction. Drones are not eliminating jobs; they are elevating them.



Training in Texas

FAA rules require every commercial or government drone operator to hold a Part 107 Remote Pilot Certificate. Fortunately, Texas offers many affordable training options.

Community colleges such as Midland College and South Plains College provide Part 107 prep and hands-on flight training, typically costing $350 to $450 per course. Private providers like Dronegenuity and From Above Droneworks offer in-person and hybrid options, from $99 online modules to $1,200 full academies. San Jacinto College and other universities run short workshops and certification tracks.

Online exam prep courses are widely available for $150–$400, making it feasible to train multiple staff at once. When departments train together, cities often negotiate group discounts and host joint scenario days at municipal training grounds.
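
For budgeting, the arithmetic is simple. Here is a sketch with illustrative figures; the cohort size, course price, and discount are assumptions, and the FAA knowledge-test fee (recently around $175 per attempt) should be verified against current pricing.

    # Illustrative training budget for one department; all figures are examples.
    staff = 12
    course_cost = 300        # mid-range exam-prep course, per person
    group_discount = 0.15    # assumed negotiated cohort discount
    faa_test_fee = 175       # knowledge-test fee per attempt (verify current pricing)

    total = staff * (course_cost * (1 - group_discount) + faa_test_fee)
    print(f"Estimated cohort cost: ${total:,.0f}")   # -> $5,160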


Fleet Models and Costs

Municipal needs vary, but most cities benefit from a tiered fleet.

  • Micro drones (under 250g) for training and quick checks: $500–$1,200.
  • Utility quads for mapping and inspection: $2,500–$6,500.
  • Enterprise drones with thermal sensors for public safety: $7,500–$16,000.
  • Heavy-lift or VTOL systems for long corridors or specialized sensors: $18,000–$45,000+.

Each drone has a three- to five-year lifespan, with batteries refreshed every 200–300 cycles. Cities must also budget for accessories, insurance, and management software.
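
To see how those figures roll up, here is a rough annualized-cost sketch. Unit prices take midpoints of the ranges above; the fleet mix, battery costs, and overhead line are invented for illustration.

    # Rough annualized cost for a small tiered fleet. Unit prices use
    # midpoints of the ranges above; quantities and other figures are assumptions.
    FLEET = [
        # (tier, quantity, unit cost, service life in years,
        #  battery packs replaced per year, cost per battery pack)
        ("micro",      4,    850, 3, 2,  80),
        ("utility",    3,  4500, 4, 3, 200),
        ("enterprise", 2, 12000, 5, 3, 400),
    ]
    SOFTWARE_AND_INSURANCE = 6000   # assumed annual program overhead

    annual = SOFTWARE_AND_INSURANCE
    for tier, qty, unit_cost, life_years, batteries, battery_cost in FLEET:
        annual += qty * (unit_cost / life_years + batteries * battery_cost)
    print(f"Approximate annualized fleet cost: ${annual:,.0f}")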



Policy and Legal Landscape

Federally, the FAA regulates drone operations under Part 107. The rules generally limit altitude to 400 feet above ground level, require flights within visual line of sight, and mandate Remote ID for most aircraft. Waivers can authorize advanced operations, such as flying beyond visual line of sight (BVLOS).
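
These constraints lend themselves to a simple automated pre-flight screen. Below is a minimal sketch with invented function and field names; it simplifies the rules (for example, it ignores the exception allowing flight within 400 feet of a structure) and is a planning aid, not a compliance determination.

    # Minimal pre-flight screen against the Part 107 limits described above.
    # Function and field names are invented; this is a planning aid only.
    def part_107_screen(planned_altitude_agl_ft, within_vlos,
                        remote_id_equipped, has_bvlos_waiver=False):
        issues = []
        if planned_altitude_agl_ft > 400:
            issues.append("planned altitude exceeds 400 ft AGL")
        if not within_vlos and not has_bvlos_waiver:
            issues.append("beyond visual line of sight without a BVLOS waiver")
        if not remote_id_equipped:
            issues.append("aircraft lacks Remote ID broadcast")
        return issues or ["no issues flagged by this screen"]

    print(part_107_screen(350, within_vlos=True, remote_id_equipped=True))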

In Texas, additional laws restrict image capture in certain contexts and impose rules around critical infrastructure. Local governments cannot regulate airspace, but they can and should regulate employee conduct, data use, privacy, and procurement.

Transparency is crucial. Cities must publish clear retention policies, flight logs, and citizen FAQs.


Privacy, Labor, and Community Trust

For communities to embrace drones, cities must be proactive.

Privacy: Drones should collect only what is necessary, with cameras pointed at mission targets rather than private backyards. Non-evidentiary footage should be deleted within 30–90 days.

Labor: Cities should emphasize that drones augment rather than replace workers. They shift dangerous tasks to machines while providing staff new certifications and career paths.

Equity: Larger cities may advance faster than small towns, but shared services, inter-local agreements, and regional training programs can close the gap.

Community Trust: Transparency builds legitimacy. Cities should publish quarterly metrics, log complaints, host public demos, and maintain a clear point of contact for concerns.


Lessons from Failures

Not every program has succeeded. Across the country, drone initiatives have stumbled in predictable ways:

  • Community Pushback: Chula Vista’s pioneering drone-as-first-responder program drew criticism for surveillance concerns, while New York City’s holiday monitoring drones sparked public backlash. Lesson: transparency and engagement must come first.
  • Operational Incidents: A Charlotte police drone crashed into a house, and some agencies lost FAA waivers due to compliance lapses. Lesson: one mistake can jeopardize an entire program; training and discipline are essential.
  • Budget Failures: Dallas and other cities saw expansions stall over hidden costs for software and maintenance. Smaller towns wasted funds buying consumer drones that quickly wore out. Lesson: plan for lifecycle costs, not just hardware.
  • Legal Overreach: Connecticut’s proposal to arm police drones with “less-lethal” weapons collapsed amid backlash, while San Diego faced court challenges over warrant requirements. Lesson: pushing boundaries invites restrictions.
  • Scaling Gaps: Rural Texas counties bought drones with grants but lacked certified pilots or insurance. Small towns gathered imagery but had no analysts to use it. Lesson: drones without people and integration are wasted purchases.

Recommendations

  1. Invest in training through Texas colleges and private providers.
  2. Procure wisely, choosing modular, upgradeable hardware.
  3. Adopt clear policies on payloads, privacy, and data retention.
  4. Prioritize non-kinetic payloads such as cameras, sensors, and lighting.
  5. Prepare for BVLOS, which will transform municipal use once authorized.
  6. Ensure equity, supporting smaller cities through regional cooperation.

Conclusion

Drones are no longer experimental novelties. They are rapidly becoming a core municipal utility—a shared service as essential as public works fleets or GIS. Their greatest promise lies not in flashy technology but in the steady, practical benefits they bring: safer workers, faster response, better data, and more transparent government.

But the promise depends on choices. Cities must prohibit weaponized payloads, publish clear policies, train and retrain staff, and engage openly with their communities. Done right, drones can strengthen both city effectiveness and public trust.


Appendix A: Administrative Regulation on Payloads

Title: Drone Payloads and Weapons Prohibition; Data & Safety Controls
Number: AR-UAS-01
Effective Date: Upon issuance
Applies To: All city employees, contractors, volunteers, or agents operating drones (UAS) on behalf of the City


1. Purpose

This regulation ensures that all municipal drone operations are conducted lawfully, ethically, and safely. It establishes clear prohibitions on weaponized or harmful payloads and sets minimum standards for data use, transparency, and accountability.


2. Definitions

  • UAS (Drone): An uncrewed aircraft and associated equipment used for flight.
  • Payload: Any item attached to or carried by a UAS, including cameras, sensors, lights, speakers, or drop mechanisms.
  • Weaponized or Prohibited Payload: Any device or substance intended to incapacitate, injure, damage, or deliver kinetic, chemical, electrical, or incendiary effects.
  • Authorized Payload: Sensors or devices explicitly approved by the UAS Program Manager for municipal purposes.

3. Policy Statement

  • The City strictly prohibits the use of weaponized or prohibited payloads on all drones.
  • Drones may only be used for documented municipal purposes, consistent with law, FAA rules, and City policy.
  • All payloads must be inventoried and approved by the UAS Program Manager.

4. Prohibited Payloads

The following are expressly prohibited:

  • Firearms, ammunition, or explosive devices.
  • Pyrotechnic, incendiary, or chemical agents (including tear gas, pepper spray, smoke bombs).
  • Conducted electrical weapons (e.g., TASER-type devices).
  • Projectiles, hard object drop devices, or kinetic impact payloads intended for crowd control.
  • Covert audio or visual recording devices in violation of state or federal law.

Exception: Non-weaponized lifesaving payloads (e.g., flotation devices, first aid kits, rescue lines) may be deployed only with prior written approval of the Program Manager and after a documented risk assessment.


5. Authorized Payloads

Authorized payloads include, but are not limited to:

  • Imaging sensors (visual, thermal, multispectral, LiDAR).
  • Environmental sensors (methane detectors, gas analyzers, air quality monitors).
  • Lighting systems (searchlights, strobes).
  • Loudspeakers for announcements or evacuation instructions.
  • Non-weaponized emergency supply drops (medical kits, flotation devices).
  • Tethered systems for persistent observation or communications relay.
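
Read together, Sections 4 and 5 amount to a deny list plus an allowlist, with a narrow approval path for lifesaving drops. A minimal sketch of that check follows, with invented category labels; the regulation's text, not the code, governs.

    # Sections 4-5 expressed as a deny list plus an allowlist.
    # Category labels are invented; the regulation's text governs.
    PROHIBITED = {"firearm", "explosive", "incendiary", "chemical_agent",
                  "conducted_electrical_weapon", "kinetic_drop", "covert_recorder"}
    AUTHORIZED = {"visual_camera", "thermal_camera", "lidar", "gas_sensor",
                  "searchlight", "loudspeaker", "tether_relay"}
    LIFESAVING_EXCEPTION = {"flotation_drop", "first_aid_drop", "rescue_line"}

    def payload_status(category):
        if category in PROHIBITED:
            return "prohibited: remove from service and log"
        if category in LIFESAVING_EXCEPTION:
            return "exception: requires written Program Manager approval and risk assessment"
        if category in AUTHORIZED:
            return "authorized: confirm inventory entry"
        return "unknown: route to UAS Program Manager for review"

    print(payload_status("thermal_camera"))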

6. Oversight and Accountability

  • The UAS Program Manager must approve all payload configurations before deployment.
  • Departments must maintain an updated inventory of drones and payloads.
  • Quarterly inspections will be conducted to verify compliance.
  • An annual public report will summarize drone use, payload types, and incidents.

7. Data Controls

  • Minimization: Only record what is necessary for the mission.
  • Retention:
    • Non-evidentiary footage: 30–90 days.
    • Evidentiary footage: retained per case requirements and applicable records law.
    • Mapping/orthomosaics: retained per project records schedule.
  • Access: Role-based permissions, with audit logs.
  • Public Release: Media released under public records law must be reviewed for privacy and redaction (faces, license plates, sensitive sites).
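
The retention tiers above map naturally onto an automated purge job. A minimal sketch follows, assuming an invented should_purge helper and a 60-day window chosen from the 30–90 day range; evidentiary and legal-hold material is never auto-purged.

    # Sketch of the Section 7 retention logic. Field names and the 60-day
    # window are illustrative; holds and evidentiary status always win.
    from datetime import date, timedelta

    NON_EVIDENTIARY_WINDOW = timedelta(days=60)   # chosen from the 30-90 day range

    def should_purge(recorded_on, is_evidentiary, under_legal_hold, today=None):
        today = today or date.today()
        if is_evidentiary or under_legal_hold:
            return False   # retained per case and records requirements
        return today - recorded_on > NON_EVIDENTIARY_WINDOW

    print(should_purge(date(2025, 1, 5), is_evidentiary=False,
                       under_legal_hold=False, today=date(2025, 4, 1)))   # -> True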

8. Training Requirements

  • All operators must hold an FAA Part 107 Remote Pilot Certificate.
  • Annual city-approved training on:
    • This regulation (AR-UAS-01).
    • Privacy and data retention.
    • Citizen engagement and de-escalation.
  • Scenario-based training must be conducted at least once per year.

9. Enforcement

  • Violations of this regulation may result in disciplinary action up to and including termination of employment or contract.
  • Prohibited payloads will be confiscated, logged, and removed from service.
  • Cases involving unlawful weaponization will be referred for criminal investigation.

10. Effective Date

This regulation is effective immediately upon approval by the City Manager and shall remain in force until amended or rescinded.

Appendix B: FAA Part 107 Sample Questions (Representative, 25 Items)

Note: These questions are drawn from FAA study materials and training resources. They are not live exam questions but are representative of the knowledge areas tested.

  1. Under Part 107, what is the maximum allowable altitude for a small UAS?
     A. 200 feet AGL
     B. 400 feet AGL ✅
     C. 500 feet AGL
  2. What is the maximum ground speed allowed?
     A. 87 knots (100 mph) ✅
     B. 100 knots (115 mph)
     C. 87 mph
  3. To operate a small UAS for commercial purposes, which certification is required?
     A. Private Pilot Certificate
     B. Remote Pilot Certificate with a small UAS rating ✅
     C. Student Pilot Certificate
  4. Which airspace requires ATC authorization for UAS operations?
     A. Class G
     B. Class C ✅
     C. Class E below 400 ft
  5. How is controlled airspace authorization obtained?
     A. Verbal ATC request
     B. Filing a VFR flight plan
     C. Through LAANC or DroneZone ✅
  6. Minimum visibility requirement for Part 107 operations?
     A. 1 statute mile
     B. 3 statute miles ✅
     C. 5 statute miles
  7. Required distance from clouds?
     A. 500 feet below, 2,000 feet horizontally ✅
     B. 1,000 feet below, 1,000 feet horizontally
     C. No minimum distance
  8. A METAR states: KDAL 151853Z 14004KT 10SM FEW040 30/22 A2992. What sky condition is reported?
     A. Clear skies
     B. Few clouds at 4,000 feet ✅
     C. Broken clouds at 4,000 feet
  9. A TAF includes BKN020. What does this mean?
     A. Broken clouds at 200 feet
     B. Broken clouds at 2,000 feet ✅
     C. Overcast at 20,000 feet
  10. High humidity combined with high temperature generally results in:
     A. Increased performance
     B. Reduced performance ✅
     C. No effect
  11. If a drone’s center of gravity is too far aft, what happens?
     A. Faster than normal flight
     B. Instability, difficult recovery ✅
     C. Less battery use
  12. High density altitude (hot, high, humid) causes:
     A. Increased battery life
     B. Decreased propeller efficiency, shorter flights ✅
     C. No effect
  13. A drone at max gross weight of 55 lbs carries a 10 lb payload. Payload percent?
     A. 18% ✅
     B. 10%
     C. 20%
  14. At maximum gross weight, performance is:
     A. Improved stability
     B. Reduced maneuverability and endurance ✅
     C. No change
  15. The purpose of Crew Resource Management is:
     A. To reduce paperwork
     B. To use teamwork and communication to improve safety ✅
     C. To reduce training costs
  16. GPS signal lost and drone drifts — first action?
     A. Immediate Return-to-Home
     B. Switch to ATTI/manual mode, maintain control, land ✅
     C. Climb higher for GPS
  17. If a drone causes $500+ in property damage, what is required?
     A. Report only to local police
     B. FAA report within 10 days ✅
     C. No report required
  18. If the remote PIC is incapacitated, the visual observer should:
     A. Land the drone ✅
     B. Call ATC
     C. Wait until PIC recovers
  19. On a sectional chart, a shaded magenta vignette indicates:
     A. Class E starting at 700 feet AGL ✅
     B. Class C boundary
     C. Restricted airspace
  20. A dashed blue line on a sectional chart indicates:
     A. Class B airspace
     B. Class D airspace ✅
     C. Class G airspace
  21. A magenta dashed circle indicates:
     A. Class E starting at surface ✅
     B. Class G airspace
     C. No restrictions
  22. Floor of Class E when the sectional shows the fuzzy side of a blue vignette?
     A. Surface
     B. 700 feet AGL
     C. 1,200 feet AGL ✅
  23. Main concern with fatigue while flying?
     A. Reduced battery performance
     B. Slower reaction and poor decision-making ✅
     C. Increased radio interference
  24. Alcohol is prohibited within how many hours of UAS operation?
     A. 4 hours
     B. 8 hours ✅
     C. 12 hours
  25. Maximum allowable BAC for remote pilots?
     A. 0.08%
     B. 0.04% ✅
     C. 0.02%
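
As a worked check on item 13: payload weight divided by maximum gross weight gives 10 lb ÷ 55 lb ≈ 0.182, or about 18 percent, which is why A is the marked answer.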