The New York Nurses’ Strike, AI, and the Question Every Profession Is About to Face

A collaboration between Lewis McLain & AI

The threatened nurses’ strike in New York City today is being discussed as a labor dispute, but it is better understood as a systems negotiation under financial pressure. Thousands of registered nurses represented by the New York State Nurses Association (NYSNA) have pushed back against major hospital systems—including Mount Sinai Health System, Montefiore Medical Center, and NewYork-Presbyterian—over staffing, workload, and the terms under which new technology is introduced into care.

To understand what is really happening, one has to acknowledge both sides of the pressure. Nurses are stretched thin. But hospital administrators are also operating in an environment of rising labor costs, payer constraints, regulatory exposure, and reputational risk. AI enters this moment not as a villain or savior, but as a lever—one that can be pulled well or badly.


The Clinical Reality: A Team Under Strain

Modern hospital care is not delivered by a single role. It is delivered by a clinical triangle:

  • Bedside nurses, who provide continuous observation, early detection, and human presence.
  • Hospitalists and floor doctors, who integrate evolving data into daily diagnostic and treatment decisions.
  • Attending physicians, who carry longitudinal responsibility for diagnosis, care strategy, and outcomes.

When this triangle is overloaded, care quality degrades—not because clinicians are unskilled, but because attention is fragmented.

A central grievance in the strike is that too much clinical time is consumed by documentation, coordination, and compliance tasks that add little to patient outcomes. Nurses did not enter the profession to spend their best hours feeding data into systems. They entered it to observe, assess, comfort, and intervene. When that calling is crowded out by screens, burnout follows.


Why AI Raises Anxiety—and Why That Anxiety Is Rational

AI’s arrival in hospitals coincides with staffing shortages and cost containment mandates. That timing matters.

Clinicians are not primarily afraid that AI will replace bedside judgment. They are afraid it will be used to justify higher throughput without relief—the familiar logic of “you’re more efficient now, so you can handle more.”

From a labor perspective, that fear is rational. From a management perspective, the temptation is real. Efficiency gains are often absorbed invisibly into higher census, tighter schedules, or reduced staffing buffers.

But that path misunderstands where AI’s true value lies.


The Administrative Case for AI—Done Right

Hospital administrators are under intense pressure to control costs, reduce errors, and protect institutional reputation. Used correctly, AI directly serves those goals—not by replacing clinicians, but by reducing risk and increasing accuracy.

Consider what AI does well today and will do better soon:

  • Documentation accuracy and completeness
    AI-assisted charting reduces omissions, inconsistencies, and after-the-fact corrections—key drivers of malpractice exposure.
  • Early risk detection
    Pattern recognition across vitals, labs, and notes can flag deterioration earlier, allowing human intervention sooner.
  • Continuity and handoff clarity
    Clear summaries reduce miscommunication across shifts—a major source of adverse events.
  • Burnout reduction and retention
    A hospital known as a place where clinicians spend time with patients—not screens—retains staff more effectively. Turnover is expensive. Reputation matters.
  • Regulatory and payer confidence
    More consistent records and clearer clinical rationale improve audits, reviews, and reimbursement defensibility.

In short, AI used as an assistant improves care quality, risk management, and institutional stability—all core administrative objectives.


The Crucial Design Choice: Assistant or Multiplier

The disagreement is not about whether AI should exist. It is about what the efficiency dividend is used for.

If AI eliminates even 10% of non-clinical workload, that capacity can be treated in two ways:

  1. As a multiplier
    More patients per nurse, tighter staffing grids, higher alert volume.
  2. As an assistant
    More bedside observation, better diagnostics, calmer clinicians, lower error rates.

The first approach extracts value until the system breaks.
The second compounds value by protecting judgment.

Administrators who choose the second path are not indulging sentimentality; they are investing in accuracy, safety, and long-term workforce stability.


Why Nurses Are Right to Insist on Guardrails

Nurses’ calls for explicit contract language around AI are not anti-technology. They are pro-alignment.

They are asking for assurance that:

  • AI will reduce clerical burden, not increase patient ratios.
  • Human clinical judgment remains central and accountable.
  • Efficiency gains return as time and focus, not silent workload creep.

Absent those guarantees, skepticism is not obstruction—it is prudence.


The Deeper Truth: Why People Choose Their Professions

This dispute surfaces a deeper, universal truth.

Nurses did not fall in love with nursing to stare at documentation screens.
Doctors did not train for decades to chase alerts and reconcile notes.
Most professionals—across fields—did not choose their work to become data clerks.

They chose it to think, judge, create, and serve.


The End Note: This Is Not Just About Healthcare

What is happening in New York hospitals is a preview of what every profession is about to face.

Whether it is:

  • Nurses and physicians
  • Accountants and auditors
  • City secretaries and budget analysts
  • Engineers, planners, or consultants

The same question will arise:

When AI saves time, does that time go back to the human purpose of the profession—or is it absorbed as more output?

Institutions that answer this wisely will gain accuracy, loyalty, reputation, and resilience. Those that do not will experience faster burnout, higher turnover, and brittle systems masked as efficiency.

The New York nurses’ strike is not resisting the future.
It is negotiating the terms under which the future becomes sustainable.

And that negotiation—quietly or loudly—is coming for everyone.

The Day the iPhone Rewired the World

A collaboration between Lewis McLain & AI

On January 9, 2007, at Macworld in San Francisco, Steve Jobs walked onto the stage and delivered one of the most consequential product announcements in modern history. He framed it theatrically—three devices in one: an iPod, a phone, and an internet communicator. Then he paused, smiled, and revealed the trick. They were not three devices. They were one. The Apple iPhone had arrived.

What followed was not merely a successful product launch. It was a hinge moment—one that quietly reordered how humans interact with technology, with information, with each other, and even with themselves.


What Made the iPhone Event Different

The iPhone announcement mattered not because it was the first smartphone, but because it redefined what a phone was supposed to be.

At the time, the market was dominated by devices with physical keyboards, styluses, nested menus, and clunky mobile browsers. BlackBerry owned business communication. Nokia owned scale. Microsoft owned enterprise software assumptions. Apple owned none of these markets.

Yet the iPhone introduced several radical departures:

  • Multi-touch as the interface
    Fingers replaced keyboards and styluses. Pinch, swipe, and tap turned abstract computing into something instinctive and physical.
  • A real web browser
    Not a stripped-down “mobile” version of the internet, but the actual web—zoomable, readable, usable.
  • Software-first design
    The device wasn’t defined by buttons or ports but by software, animations, and user experience. Hardware existed to serve software, not the other way around.
  • A unified ecosystem vision
    The iPhone was conceived not as a gadget but as a node—connected to iTunes, Macs, carriers, and eventually an App Store that did not yet exist but was already implied.

Jobs did not spend the keynote talking about specs. He talked about experience. That choice alone signaled a philosophical shift in consumer technology.


The Immediate Shockwave

The reaction was mixed. Some praised the elegance. Others mocked the lack of a physical keyboard, the high price, and the absence of third-party apps at launch. Industry leaders dismissed it as a niche luxury device.

Those critiques aged poorly.

Within a few years, nearly every phone manufacturer had abandoned keyboards. Touchscreens became universal. Mobile operating systems replaced desktop metaphors. The skeptics were not foolish—they were anchored to the past in a moment when the ground moved.


How the iPhone Changed Everyday Life

The iPhone did not just change phones. It collapsed entire categories of human activity into a pocket-sized slab of glass.

Communication shifted from voice-first to text, image, and video-first. Navigation moved from paper maps and memory to GPS-by-default. Photography became constant and social rather than occasional and deliberate. The internet ceased to be a place you “went” and became something you carried.

Several deeper changes followed:

  • Time became fragmented
    Micro-moments—checking, scrolling, responding—filled the spaces once occupied by waiting, boredom, or reflection.
  • Attention became a resource
    Notifications, feeds, and apps competed continuously for awareness, reshaping media, advertising, and even politics.
  • Work escaped the office
    Email, documents, approvals, and meetings followed people everywhere, blurring boundaries between professional and personal life.
  • Memory outsourced itself
    Phone numbers, directions, appointments, even photographs replaced recall with retrieval.

The iPhone did not force these changes, but it made them frictionless, and friction is often the last defense of human habits.


The App Store Effect

In 2008, a year after the iPhone reached customers, Apple launched the App Store, and the iPhone’s impact accelerated dramatically. Developers gained a global distribution platform overnight. Entire industries emerged—ride-sharing, mobile banking, food delivery, social media influencers, mobile gaming—built on the assumption that everyone carried a powerful computer at all times.

This was not just technological leverage. It was economic leverage.

Apple positioned itself as the gatekeeper of a new digital economy, collecting a share of transactions while letting others shoulder innovation risk. Few business models in history have been so scalable with so little marginal cost.


The Financial Transformation of Apple

Before the iPhone, Apple was a successful but niche computer company. After the iPhone, it became something else entirely.

The iPhone evolved into Apple’s single largest revenue driver, often accounting for roughly half of annual revenue in its peak years. More importantly, it pulled customers into a broader ecosystem—Macs, iPads, Apple Watch, AirPods, services, subscriptions—each reinforcing the others.

Apple’s profits followed accordingly:

  • Revenue grew from tens of billions annually to hundreds of billions
  • Gross margins remained unusually high for a hardware company
  • Cash reserves swelled to levels rivaling national treasuries
  • Apple became, at times, the most valuable company in the world

The genius was not just the device. It was the integration—hardware, software, services, and brand operating as a single system. Competitors could copy features, but not the whole machine.


The Long View

January 9, 2007, now looks less like a product launch and more like a civilizational inflection point. The iPhone compressed computing into daily life so completely that it is now difficult to remember what came before.

That power has brought wonder and convenience—and distraction, dependency, and new ethical dilemmas. Tools that shape attention inevitably shape culture.

Apple did not merely sell a phone that day. It sold a future—one we are still living inside, still arguing about, and still trying to understand.

Artificial Intelligence in City Government: From Adoption to Accountability

A Practical Framework for Innovation, Oversight, and Public Trust

A collaboration between Lewis McLain & AI – A Companion to the previous blog on AI

Artificial intelligence has moved from novelty to necessity in public institutions. What began as experimental tools for drafting documents or summarizing data is now embedded in systems that influence budgeting, service delivery, enforcement prioritization, procurement screening, and public communication. Cities are discovering that AI is no longer optional—but neither is governance.

This essay unifies two truths that are often treated as competing ideas but must now be held together:

  1. AI adoption is inevitable and necessary if cities are to remain operationally effective and fiscally sustainable.
  2. AI oversight is now unavoidable wherever systems influence decisions affecting people, rights, or public trust.

These are not contradictions. They are sequential realities. Adoption without governance leads to chaos. Governance without adoption leads to irrelevance. The task for modern city leadership is to do both—intentionally.

I. The Adoption Imperative: AI as Municipal Infrastructure

Cities face structural pressures that are not temporary: constrained budgets, difficulty recruiting and retaining staff, growing service demands, and rising analytical complexity. AI tools offer a way to expand institutional capacity without expanding payrolls at the same rate.

Common municipal uses already include:

  • Drafting ordinances, reports, and correspondence
  • Summarizing public input and staff analysis
  • Forecasting revenues, expenditures, and service demand
  • Supporting customer service through chat or triage tools
  • Enhancing internal research and analytics

In this sense, AI is not a gadget. It is infrastructure, comparable to ERP systems, GIS, or financial modeling platforms. Cities that delay adoption will find themselves less capable, less competitive, and more expensive to operate.

Adoption, however, is not merely technical. AI reshapes workflows, compresses tasks, and changes how work is performed. Over time, this may alter staffing needs. The question is not whether AI will change city operations—it already is. The question is whether those changes are guided or accidental.

II. The Oversight Imperative: Why Governance Is Now Required

As AI systems move beyond internal productivity and begin to influence decisions—directly or indirectly—oversight becomes essential.

AI systems are now used, or embedded through vendors, in areas such as:

  • Permit or inspection prioritization
  • Eligibility screening for programs or services
  • Vendor risk scoring and procurement screening
  • Enforcement triage
  • Public safety analytics

When AI recommendations shape outcomes, even if a human signs off, accountability cannot be vague. Errors at scale, opaque logic, and undocumented assumptions create legal exposure and erode public trust faster than traditional human error.

Oversight is required because:

  • Scale magnifies mistakes: a single flaw can affect thousands before detection.
  • Opacity undermines legitimacy: residents are less forgiving of decisions they cannot understand.
  • Legal scrutiny is increasing: courts and legislatures are paying closer attention to algorithmic decision-making.

Oversight is not about banning AI. It is about ensuring AI is used responsibly, transparently, and under human control.

III. Bridging Adoption and Oversight: A Two-Speed Framework

The tension between “move fast” and “govern carefully” dissolves once AI uses are separated by risk.

Low-Risk, Internal AI Uses

Examples include drafting, summarization, forecasting, research, and internal analytics.

Approach:
Adopt quickly, document lightly, train staff, and monitor outcomes.

Decision-Adjacent or High-Risk AI Uses

Examples include enforcement prioritization, eligibility determinations, public safety analytics, and procurement screening affecting vendors.

Approach:
Require review, documentation, transparency, and meaningful human oversight before deployment.

This two-speed framework allows cities to capture productivity benefits immediately while placing guardrails only where risk to rights, equity, or trust is real.
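To make the framework concrete, here is a minimal, hypothetical sketch of how a city IT or internal-audit office might keep an AI-use inventory organized around the two risk tiers. All names, tiers, and example entries are illustrative assumptions for discussion, not requirements of any statute, vendor product, or particular city.

```python
# Hypothetical sketch: an AI-use inventory built around the two-speed framework.
# Tier names, fields, and example entries are illustrative assumptions only.

from dataclasses import dataclass, field
from enum import Enum


class RiskTier(Enum):
    LOW = "low-risk / internal"              # drafting, summarization, forecasting
    HIGH = "decision-adjacent / high-risk"   # eligibility, enforcement, procurement


@dataclass
class AIUseCase:
    name: str
    department: str
    tier: RiskTier
    human_reviewer: str = ""                     # accountable reviewer for high-risk uses
    documentation: list[str] = field(default_factory=list)

    def ready_to_deploy(self) -> bool:
        """Low-risk uses proceed with light documentation; high-risk uses
        need a named human reviewer and at least one review artifact."""
        if self.tier is RiskTier.LOW:
            return True
        return bool(self.human_reviewer) and len(self.documentation) > 0


# Example inventory entries (illustrative only).
inventory = [
    AIUseCase("Agenda item drafting assistant", "City Secretary", RiskTier.LOW),
    AIUseCase(
        "Permit inspection prioritization",
        "Development Services",
        RiskTier.HIGH,
        human_reviewer="Chief Building Official",
        documentation=["bias review 2025-Q3", "vendor disclosure on file"],
    ),
]

for use in inventory:
    print(f"{use.name}: {use.tier.value} -> deployable={use.ready_to_deploy()}")
```

Even a simple register like this gives a city something it often lacks today: a documented answer to which AI uses exist, who is accountable for them, and what evidence supports deploying the higher-risk ones.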

IV. Texas Context: Statewide Direction on AI Governance

The Texas Legislature reinforced this balanced approach through the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026. The law does not prohibit AI use. Instead, it establishes expectations for transparency, accountability, and prohibited practices—particularly for government entities.

Key elements include:

  • Disclosure when residents interact with AI systems
  • Prohibitions on social scoring by government
  • Restrictions on discriminatory AI use
  • Guardrails around biometric and surveillance applications
  • Civil penalties for unlawful or deceptive deployment
  • Creation of a statewide Artificial Intelligence Council

The message is clear: Texas expects governments to adopt AI responsibly—neither recklessly nor fearfully.

V. Implications for Cities and Transit Agencies

Cities are already using AI, often unknowingly, through vendor-provided software. Transit agencies face elevated exposure because they combine finance, enforcement, surveillance, and public safety.

The greatest risk is not AI itself, but uncontrolled AI:

  • Vendor-embedded algorithms without disclosure
  • No documented human accountability
  • No audit trail
  • No process for suspension or correction

Cities that act early reduce legal risk, preserve public trust, and maintain operational flexibility.

VI. Workforce Implications: Accurate and Defensible Language

AI will change how work is done over time. It would be inaccurate and irresponsible to claim otherwise.

At the same time, AI does not mandate immediate workforce reductions. In public institutions, workforce impacts—if they occur—are most likely to happen gradually through:

  • Attrition
  • Reassignment
  • Retraining
  • Role redesign

Final staffing decisions remain with City leadership and City Council. AI is a tool for improving capacity and sustainability, not an automatic trigger for reductions.

Conclusion: Coherent, Accountable AI

AI adoption without governance invites chaos. Governance without adoption invites stagnation. Cities that succeed will do both—moving quickly where risk is low and governing carefully where risk is high.

This is not about technology hype. It is about institutional competence in a digital age.


Appendix 1 — Texas Responsible Artificial Intelligence Governance Act (HB 149)

Source: Texas Legislature Online

                                                   H.B. No. 149

AN ACT

relating to regulation of the use of artificial intelligence systems in this state; providing civil penalties.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1.  This Act may be cited as the Texas Responsible Artificial Intelligence Governance Act.

SECTION 2.  Section 503.001, Business & Commerce Code, is amended by amending Subsections (a) and (e) and adding Subsections (b-1) and (f) to read as follows:

(a)  In this section:

(1)  “Artificial intelligence system” has the meaning assigned by Section 551.001.

(2)  “Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.

(b-1)  For purposes of Subsection (b), an individual has not been informed of and has not provided consent for the capture or storage of a biometric identifier of an individual for a commercial purpose based solely on the existence of an image or other media containing one or more biometric identifiers of the individual on the Internet or other publicly available source unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate.

(e)  This section does not apply to:

(1)  voiceprint data retained by a financial institution or an affiliate of a financial institution, as those terms are defined by 15 U.S.C. Section 6809;

(2)  the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual; or

(3)  the development or deployment of an artificial intelligence model or system for the purposes of:

(A)  preventing, detecting, protecting against, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any other illegal activity;

(B)  preserving the integrity or security of a system; or

(C)  investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, a malicious or deceptive activity, or any other illegal activity.

(f)  If a biometric identifier captured for the purpose of training an artificial intelligence system is subsequently used for a commercial purpose not described by Subsection (e), the person possessing the biometric identifier is subject to:

(1)  this section’s provisions for the possession and destruction of a biometric identifier; and

(2)  the penalties associated with a violation of this section.

SECTION 3.  Section 541.104(a), Business & Commerce Code, is amended to read as follows:

(a)  A processor shall adhere to the instructions of a controller and shall assist the controller in meeting or complying with the controller’s duties or requirements under this chapter, including:

(1)  assisting the controller in responding to consumer rights requests submitted under Section 541.051 by using appropriate technical and organizational measures, as reasonably practicable, taking into account the nature of processing and the information available to the processor;

(2)  assisting the controller with regard to complying with requirements relating to the security of processing personal data, and if applicable, the personal data collected, stored, and processed by an artificial intelligence system, as that term is defined by Section 551.001, and to the notification of a breach of security of the processor’s system under Chapter 521, taking into account the nature of processing and the information available to the processor; and

(3)  providing necessary information to enable the controller to conduct and document data protection assessments under Section 541.105.

SECTION 4.  Title 11, Business & Commerce Code, is amended by adding Subtitle D to read as follows:

SUBTITLE D.  ARTIFICIAL INTELLIGENCE PROTECTION

CHAPTER 551.  GENERAL PROVISIONS

Sec. 551.001.  DEFINITIONS.  In this subtitle:

(1)  “Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

(2)  “Consumer” means an individual who is a resident of this state acting only in an individual or household context.  The term does not include an individual acting in a commercial or employment context.

(3)  “Council” means the Texas Artificial Intelligence Council established under Chapter 554.

Sec. 551.002.  APPLICABILITY OF SUBTITLE.  This subtitle applies only to a person who:

(1)  promotes, advertises, or conducts business in this state;

(2)  produces a product or service used by residents of this state; or

(3)  develops or deploys an artificial intelligence system in this state.

Sec. 551.003.  CONSTRUCTION AND APPLICATION OF SUBTITLE.  This subtitle shall be broadly construed and applied to promote its underlying purposes, which are to:

(1)  facilitate and advance the responsible development and use of artificial intelligence systems;

(2)  protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems;

(3)  provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and

(4)  provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.

CHAPTER 552.  ARTIFICIAL INTELLIGENCE PROTECTION

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 552.001.  DEFINITIONS.  In this chapter:

(1)  “Deployer” means a person who deploys an artificial intelligence system for use in this state.

(2)  “Developer” means a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state.

(3)  “Governmental entity” means any department, commission, board, office, authority, or other administrative unit of this state or of any political subdivision of this state, that exercises governmental functions under the authority of the laws of this state.  The term does not include:

(A)  a hospital district created under the Health and Safety Code or Article IX, Texas Constitution; or

(B)  an institution of higher education, as defined by Section 61.003, Education Code, including any university system or any component institution of the system.

Sec. 552.002.  CONSTRUCTION OF CHAPTER.  This chapter may not be construed to:

(1)  impose a requirement on a person that adversely affects the rights or freedoms of any person, including the right of free speech; or

(2)  authorize any department or agency other than the Department of Insurance to regulate or oversee the business of insurance.

Sec. 552.003.  LOCAL PREEMPTION.  This chapter supersedes and preempts any ordinance, resolution, rule, or other regulation adopted by a political subdivision regarding the use of artificial intelligence systems.

SUBCHAPTER B. DUTIES AND PROHIBITIONS ON USE OF ARTIFICIAL INTELLIGENCE

Sec. 552.051.  DISCLOSURE TO CONSUMERS.  (a)  In this section, “health care services” means services related to human health or to the diagnosis, prevention, or treatment of a human disease or impairment provided by an individual licensed, registered, or certified under applicable state or federal law to provide those services.

(b)  A governmental agency that makes available an artificial intelligence system intended to interact with consumers shall disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system.

(c)  A person is required to make the disclosure under Subsection (b) regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.

(d)  A disclosure under Subsection (b):

(1)  must be clear and conspicuous;

(2)  must be written in plain language; and

(3)  may not use a dark pattern, as that term is defined by Section 541.001.

(e)  A disclosure under Subsection (b) may be provided by using a hyperlink to direct a consumer to a separate Internet web page.

(f)  If an artificial intelligence system is used in relation to health care service or treatment, the provider of the service or treatment shall provide the disclosure under Subsection (b) to the recipient of the service or treatment or the recipient’s personal representative not later than the date the service or treatment is first provided, except in the case of emergency, in which case the provider shall provide the required disclosure as soon as reasonably possible.

Sec. 552.052.  MANIPULATION OF HUMAN BEHAVIOR.  A person may not develop or deploy an artificial intelligence system in a manner that intentionally aims to incite or encourage a person to:

(1)  commit physical self-harm, including suicide;

(2)  harm another person; or

(3)  engage in criminal activity.

Sec. 552.053.  SOCIAL SCORING.  A governmental entity may not use or deploy an artificial intelligence system that evaluates or classifies a natural person or group of natural persons based on social behavior or personal characteristics, whether known, inferred, or predicted, with the intent to calculate or assign a social score or similar categorical estimation or valuation of the person or group of persons that results or may result in:

(1)  detrimental or unfavorable treatment of a person or group of persons in a social context unrelated to the context in which the behavior or characteristics were observed or noted;

(2)  detrimental or unfavorable treatment of a person or group of persons that is unjustified or disproportionate to the nature or gravity of the observed or noted behavior or characteristics; or

(3)  the infringement of any right guaranteed under the United States Constitution, the Texas Constitution, or state or federal law.

Sec. 552.054.  CAPTURE OF BIOMETRIC DATA.  (a)  In this section, “biometric data” means data generated by automatic measurements of an individual’s biological characteristics.  The term includes a fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual.  The term does not include a physical or digital photograph or data generated from a physical or digital photograph, a video or audio recording or data generated from a video or audio recording, or information collected, used, or stored for health care treatment, payment, or operations under the Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. Section 1320d et seq.).

(b)  A governmental entity may not develop or deploy an artificial intelligence system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source without the individual’s consent, if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or state or federal law.

(c)  A violation of Section 503.001 is a violation of this section.

Sec. 552.055.  CONSTITUTIONAL PROTECTION.  (a)  A person may not develop or deploy an artificial intelligence system with the sole intent for the artificial intelligence system to infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution.

(b)  This section is remedial in purpose and may not be construed to create or expand any right guaranteed by the United States Constitution.

Sec. 552.056.  UNLAWFUL DISCRIMINATION.  (a)  In this section:

(1)  “Financial institution” has the meaning assigned by Section 201.101, Finance Code.

(2)  “Insurance entity” means:

(A)  an entity described by Section 82.002(a), Insurance Code;

(B)  a fraternal benefit society regulated under Chapter 885, Insurance Code; or

(C)  the developer of an artificial intelligence system used by an entity described by Paragraph (A) or (B).

(3)  “Protected class” means a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.

(b)  A person may not develop or deploy an artificial intelligence system with the intent to unlawfully discriminate against a protected class in violation of state or federal law.

(c)  For purposes of this section, a disparate impact is not sufficient by itself to demonstrate an intent to discriminate.

(d)  This section does not apply to an insurance entity for purposes of providing insurance services if the entity is subject to applicable statutes regulating unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the business of insurance.

(e)  A federally insured financial institution is considered to be in compliance with this section if the institution complies with all federal and state banking laws and regulations.

Sec. 552.057.  CERTAIN SEXUALLY EXPLICIT CONTENT AND CHILD PORNOGRAPHY.  A person may not:

(1)  develop or distribute an artificial intelligence system with the sole intent of producing, assisting or aiding in producing, or distributing:

(A)  visual material in violation of Section 43.26, Penal Code; or

(B)  deep fake videos or images in violation of Section 21.165, Penal Code; or

(2)  intentionally develop or distribute an artificial intelligence system that engages in text-based conversations that simulate or describe sexual conduct, as that term is defined by Section 43.25, Penal Code, while impersonating or imitating a child younger than 18 years of age.

SUBCHAPTER C.  ENFORCEMENT

Sec. 552.101.  ENFORCEMENT AUTHORITY.  (a)  The attorney general has exclusive authority to enforce this chapter, except to the extent provided by Section 552.106.

(b)  This chapter does not provide a basis for, and is not subject to, a private right of action for a violation of this chapter or any other law.

Sec. 552.102.  INFORMATION AND COMPLAINTS.  The attorney general shall create and maintain an online mechanism on the attorney general’s Internet website through which a consumer may submit a complaint under this chapter to the attorney general.

Sec. 552.103.  INVESTIGATIVE AUTHORITY.  (a)  If the attorney general receives a complaint through the online mechanism under Section 552.102 alleging a violation of this chapter, the attorney general may issue a civil investigative demand to determine if a violation has occurred.  The attorney general shall issue demands in accordance with and under the procedures established under Section 15.10.

(b)  The attorney general may request from the person reported through the online mechanism, pursuant to a civil investigative demand issued under Subsection (a):

(1)  a high-level description of the purpose, intended use, deployment context, and associated benefits of the artificial intelligence system with which the person is affiliated;

(2)  a description of the type of data used to program or train the artificial intelligence system;

(3)  a high-level description of the categories of data processed as inputs for the artificial intelligence system;

(4)  a high-level description of the outputs produced by the artificial intelligence system;

(5)  any metrics the person uses to evaluate the performance of the artificial intelligence system;

(6)  any known limitations of the artificial intelligence system;

(7)  a high-level description of the post-deployment monitoring and user safeguards the person uses for the artificial intelligence system, including, if the person is a deployer, the oversight, use, and learning process established by the person to address issues arising from the system’s deployment; or

(8)  any other relevant documentation reasonably necessary for the attorney general to conduct an investigation under this section.

Sec. 552.104.  NOTICE OF VIOLATION; OPPORTUNITY TO CURE.  (a)  If the attorney general determines that a person has violated or is violating this chapter, the attorney general shall notify the person in writing of the determination, identifying the specific provisions of this chapter the attorney general alleges have been or are being violated.

(b)  The attorney general may not bring an action against the person:

(1)  before the 60th day after the date the attorney general provides the notice under Subsection (a); or

(2)  if, before the 60th day after the date the attorney general provides the notice under Subsection (a), the person:

(A)  cures the identified violation; and

(B)  provides the attorney general with a written statement that the person has:

(i)  cured the alleged violation;

(ii)  provided supporting documentation to show the manner in which the person cured the violation; and

(iii)  made any necessary changes to internal policies to reasonably prevent further violation of this chapter.

Sec. 552.105.  CIVIL PENALTY; INJUNCTION.  (a)  A person who violates this chapter and does not cure the violation under Section 552.104 is liable to this state for a civil penalty in an amount of:

(1)  for each violation the court determines to be curable or a breach of a statement submitted to the attorney general under Section 552.104(b)(2), not less than $10,000 and not more than $12,000;

(2)  for each violation the court determines to be uncurable, not less than $80,000 and not more than $200,000; and

(3)  for a continued violation, not less than $2,000 and not more than $40,000 for each day the violation continues.

(b)  The attorney general may bring an action in the name of this state to:

(1)  collect a civil penalty under this section;

(2)  seek injunctive relief against further violation of this chapter; and

(3)  recover attorney’s fees and reasonable court costs or other investigative expenses.

(c)  There is a rebuttable presumption that a person used reasonable care as required under this chapter.

(d)  A defendant in an action under this section may seek an expedited hearing or other process, including a request for declaratory judgment, if the person believes in good faith that the person has not violated this chapter.

(e)  A defendant in an action under this section may not be found liable if:

(1)  another person uses the artificial intelligence system affiliated with the defendant in a manner prohibited by this chapter; or

(2)  the defendant discovers a violation of this chapter through:

(A)  feedback from a developer, deployer, or other person who believes a violation has occurred;

(B)  testing, including adversarial testing or red-team testing;

(C)  following guidelines set by applicable state agencies; or

(D)  if the defendant substantially complies with the most recent version of the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” published by the National Institute of Standards and Technology or another nationally or internationally recognized risk management framework for artificial intelligence systems, an internal review process.

(f)  The attorney general may not bring an action to collect a civil penalty under this section against a person for an artificial intelligence system that has not been deployed.

Sec. 552.106.  ENFORCEMENT ACTIONS BY STATE AGENCIES.  (a)  A state agency may impose sanctions against a person licensed, registered, or certified by that agency for a violation of Subchapter B if:

(1)  the person has been found in violation of this chapter under Section 552.105; and

(2)  the attorney general has recommended additional enforcement by the applicable agency.

(b)  Sanctions under this section may include:

(1)  suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and

(2)  a monetary penalty not to exceed $100,000.

CHAPTER 553.  ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 553.001.  DEFINITIONS.  In this chapter:

(1)  “Applicable agency” means a department of this state established by law to regulate certain types of business activity in this state and the people engaging in that business, including the issuance of licenses and registrations, that the department determines would regulate a program participant if the person were not operating under this chapter.

(2)  “Department” means the Texas Department of Information Resources.

(3)  “Program” means the regulatory sandbox program established under this chapter that allows a person, without being licensed or registered under the laws of this state, to test an artificial intelligence system for a limited time and on a limited basis.

(4)  “Program participant” means a person whose application to participate in the program is approved and who may test an artificial intelligence system under this chapter.

SUBCHAPTER B.  SANDBOX PROGRAM FRAMEWORK

Sec. 553.051.  ESTABLISHMENT OF SANDBOX PROGRAM.  (a)  The department, in consultation with the council, shall create a regulatory sandbox program that enables a person to obtain legal protection and limited access to the market in this state to test innovative artificial intelligence systems without obtaining a license, registration, or other regulatory authorization.

(b)  The program is designed to:

(1)  promote the safe and innovative use of artificial intelligence systems across various sectors including healthcare, finance, education, and public services;

(2)  encourage responsible deployment of artificial intelligence systems while balancing the need for consumer protection, privacy, and public safety;

(3)  provide clear guidelines for a person who develops an artificial intelligence system to test systems while certain laws and regulations related to the testing are waived or suspended; and

(4)  allow a person to engage in research, training, testing, or other pre-deployment activities to develop an artificial intelligence system.

(c)  The attorney general may not file or pursue charges against a program participant for violation of a law or regulation waived under this chapter that occurs during the testing period.

(d)  A state agency may not file or pursue punitive action against a program participant, including the imposition of a fine or the suspension or revocation of a license, registration, or other authorization, for violation of a law or regulation waived under this chapter that occurs during the testing period.

(e)  Notwithstanding Subsections (c) and (d), the requirements of Subchapter B, Chapter 552, may not be waived, and the attorney general or a state agency may file or pursue charges or action against a program participant who violates that subchapter.

Sec. 553.052.  APPLICATION FOR PROGRAM PARTICIPATION.  (a)  A person must obtain approval from the department and any applicable agency before testing an artificial intelligence system under the program.

(b)  The department by rule shall prescribe the application form.  The form must require the applicant to:

(1)  provide a detailed description of the artificial intelligence system the applicant desires to test in the program, and its intended use;

(2)  include a benefit assessment that addresses potential impacts on consumers, privacy, and public safety;

(3)  describe the applicant’s plan for mitigating any adverse consequences that may occur during the test; and

(4)  provide proof of compliance with any applicable federal artificial intelligence laws and regulations.

Sec. 553.053.  DURATION AND SCOPE OF PARTICIPATION.  (a)  A program participant approved by the department and each applicable agency may test and deploy an artificial intelligence system under the program for a period of not more than 36 months.

(b)  The department may extend a test under this chapter if the department finds good cause for the test to continue.

Sec. 553.054.  EFFICIENT USE OF RESOURCES.  The department shall coordinate the activities under this subchapter and any other law relating to artificial intelligence systems to ensure efficient system implementation and to streamline the use of department resources, including information sharing and personnel.

SUBCHAPTER C.  OVERSIGHT AND COMPLIANCE

Sec. 553.101.  COORDINATION WITH APPLICABLE AGENCY.  (a)  The department shall coordinate with all applicable agencies to oversee the operation of a program participant.

(b)  The council or an applicable agency may recommend to the department that a program participant be removed from the program if the council or applicable agency finds that the program participant’s artificial intelligence system:

(1)  poses an undue risk to public safety or welfare;

(2)  violates any federal law or regulation; or

(3)  violates any state law or regulation not waived under the program.

Sec. 553.102.  PERIODIC REPORT BY PROGRAM PARTICIPANT.  (a)  A program participant shall provide a quarterly report to the department.

(b)  The report shall include:

(1)  metrics for the artificial intelligence system’s performance;

(2)  updates on how the artificial intelligence system mitigates any risks associated with its operation; and

(3)  feedback from consumers and affected stakeholders that are using an artificial intelligence system tested under this chapter.

(c)  The department shall maintain confidentiality regarding the intellectual property, trade secrets, and other sensitive information it obtains through the program.

Sec. 553.103.  ANNUAL REPORT BY DEPARTMENT.  (a)  The department shall submit an annual report to the legislature.

(b)  The report shall include:

(1)  the number of program participants testing an artificial intelligence system in the program;

(2)  the overall performance and impact of artificial intelligence systems tested in the program; and

(3)  recommendations on changes to laws or regulations for future legislative consideration.

CHAPTER 554.  TEXAS ARTIFICIAL INTELLIGENCE COUNCIL

SUBCHAPTER A.  CREATION AND ORGANIZATION OF COUNCIL

Sec. 554.001.  CREATION OF COUNCIL.  (a)  The Texas Artificial Intelligence Council is created to:

(1)  ensure artificial intelligence systems in this state are ethical and developed in the public’s best interest;

(2)  ensure artificial intelligence systems in this state do not harm public safety or undermine individual freedoms by finding issues and making recommendations to the legislature regarding the Penal Code and Chapter 82, Civil Practice and Remedies Code;

(3)  identify existing laws and regulations that impede innovation in the development of artificial intelligence systems and recommend appropriate reforms;

(4)  analyze opportunities to improve the efficiency and effectiveness of state government operations through the use of artificial intelligence systems;

(5)  make recommendations to applicable state agencies regarding the use of artificial intelligence systems to improve the agencies’ efficiency and effectiveness;

(6)  evaluate potential instances of regulatory capture, including undue influence by technology companies or disproportionate burdens on smaller innovators caused by the use of artificial intelligence systems;

(7)  evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users through the use of artificial intelligence systems;

(8)  offer guidance and recommendations to the legislature on the ethical and legal use of artificial intelligence systems;

(9)  conduct and publish the results of a study on the current regulatory environment for artificial intelligence systems;

(10)  receive reports from the Department of Information Resources regarding the regulatory sandbox program under Chapter 553; and

(11)  make recommendations for improvements to the regulatory sandbox program under Chapter 553.

(b)  The council is administratively attached to the Department of Information Resources, and the department shall provide administrative support to the council as provided by this section.

(c)  The Department of Information Resources and the council shall enter into a memorandum of understanding detailing:

(1)  the administrative support the council requires from the department to fulfill the council’s purposes;

(2)  the reimbursement of administrative expenses to the department; and

(3)  any other provisions necessary to ensure the efficient operation of the council.

Sec. 554.002.  COUNCIL MEMBERSHIP.  (a)  The council is composed of seven members as follows:

(1)  three members of the public appointed by the governor;

(2)  two members of the public appointed by the lieutenant governor; and

(3)  two members of the public appointed by the speaker of the house of representatives.

(b)  Members of the council serve staggered four-year terms, with the terms of three or four members expiring every two years.

(c)  The governor shall appoint a chair from among the members, and the council shall elect a vice chair from its membership.

(d)  The council may establish an advisory board composed of individuals from the public who possess expertise directly related to the council’s functions, including technical, ethical, regulatory, and other relevant areas.

Sec. 554.003.  QUALIFICATIONS.  Members of the council must be Texas residents and have knowledge or expertise in one or more of the following areas:

(1)  artificial intelligence systems;

(2)  data privacy and security;

(3)  ethics in technology or law;

(4)  public policy and regulation;

(5)  risk management related to artificial intelligence systems;

(6)  improving the efficiency and effectiveness of governmental operations; or

(7)  anticompetitive practices and market fairness.

Sec. 554.004.  STAFF AND ADMINISTRATION.  The council may hire an executive director and other personnel as necessary to perform its duties.

SUBCHAPTER B.  POWERS AND DUTIES OF COUNCIL

Sec. 554.101.  ISSUANCE OF REPORTS.  (a)  The council may issue reports to the legislature regarding the use of artificial intelligence systems in this state.

(b)  The council may issue reports on:

(1)  the compliance of artificial intelligence systems in this state with the laws of this state;

(2)  the ethical implications of deploying artificial intelligence systems in this state;

(3)  data privacy and security concerns related to artificial intelligence systems in this state; or

(4)  potential liability or legal risks associated with the use of artificial intelligence systems in this state.

Sec. 554.102.  TRAINING AND EDUCATIONAL OUTREACH.  The council shall conduct training programs for state agencies and local governments on the use of artificial intelligence systems.

Sec. 554.103.  LIMITATION OF AUTHORITY.  The council may not:

(1)  adopt rules or promulgate guidance that is binding for any entity;

(2)  interfere with or override the operation of a state agency; or

(3)  perform a duty or exercise a power not granted by this chapter.

SECTION 5.  Section 325.011, Government Code, is amended to read as follows:

Sec. 325.011.  CRITERIA FOR REVIEW.  The commission and its staff shall consider the following criteria in determining whether a public need exists for the continuation of a state agency or its advisory committees or for the performance of the functions of the agency or its advisory committees:

(1)  the efficiency and effectiveness with which the agency or the advisory committee operates;

(2)(A)  an identification of the mission, goals, and objectives intended for the agency or advisory committee and of the problem or need that the agency or advisory committee was intended to address; and

(B)  the extent to which the mission, goals, and objectives have been achieved and the problem or need has been addressed;

(3)(A)  an identification of any activities of the agency in addition to those granted by statute and of the authority for those activities; and

(B)  the extent to which those activities are needed;

(4)  an assessment of authority of the agency relating to fees, inspections, enforcement, and penalties;

(5)  whether less restrictive or alternative methods of performing any function that the agency performs could adequately protect or provide service to the public;

(6)  the extent to which the jurisdiction of the agency and the programs administered by the agency overlap or duplicate those of other agencies, the extent to which the agency coordinates with those agencies, and the extent to which the programs administered by the agency can be consolidated with the programs of other state agencies;

(7)  the promptness and effectiveness with which the agency addresses complaints concerning entities or other persons affected by the agency, including an assessment of the agency’s administrative hearings process;

(8)  an assessment of the agency’s rulemaking process and the extent to which the agency has encouraged participation by the public in making its rules and decisions and the extent to which the public participation has resulted in rules that benefit the public;

(9)  the extent to which the agency has complied with:

(A)  federal and state laws and applicable rules regarding equality of employment opportunity and the rights and privacy of individuals; and

(B)  state law and applicable rules of any state agency regarding purchasing guidelines and programs for historically underutilized businesses;

(10)  the extent to which the agency issues and enforces rules relating to potential conflicts of interest of its employees;

(11)  the extent to which the agency complies with Chapters 551 and 552 and follows records management practices that enable the agency to respond efficiently to requests for public information;

(12)  the effect of federal intervention or loss of federal funds if the agency is abolished;

(13)  the extent to which the purpose and effectiveness of reporting requirements imposed on the agency justifies the continuation of the requirement; [and]

(14)  an assessment of the agency’s cybersecurity practices using confidential information available from the Department of Information Resources or any other appropriate state agency; and

(15)  an assessment of the agency’s use of artificial intelligence systems, as that term is defined by Section 551.001, Business & Commerce Code, in its operations and its oversight of the use of artificial intelligence systems by persons under the agency’s jurisdiction, and any related impact on the agency’s ability to achieve its mission, goals, and objectives, made using information available from the Department of Information Resources, the attorney general, or any other appropriate state agency.

SECTION 6.  Section 2054.068(b), Government Code, is amended to read as follows:

(b)  The department shall collect from each state agency information on the status and condition of the agency’s information technology infrastructure, including information regarding:

(1)  the agency’s information security program;

(2)  an inventory of the agency’s servers, mainframes, cloud services, and other information technology equipment;

(3)  identification of vendors that operate and manage the agency’s information technology infrastructure; [and]

(4)  any additional related information requested by the department; and

(5)  an evaluation of the use or considered use of artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, by each state agency.

SECTION 7.  Section 2054.0965(b), Government Code, is amended to read as follows:

(b)  Except as otherwise modified by rules adopted by the department, the review must include:

(1)  an inventory of the agency’s major information systems, as defined by Section 2054.008, and other operational or logistical components related to deployment of information resources as prescribed by the department;

(2)  an inventory of the agency’s major databases, artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, and applications;

(3)  a description of the agency’s existing and planned telecommunications network configuration;

(4)  an analysis of how information systems, components, databases, applications, and other information resources have been deployed by the agency in support of:

(A)  applicable achievement goals established under Section 2056.006 and the state strategic plan adopted under Section 2056.009;

(B)  the state strategic plan for information resources; and

(C)  the agency’s business objectives, mission, and goals;

(5)  agency information necessary to support the state goals for interoperability and reuse; and

(6)  confirmation by the agency of compliance with state statutes, rules, and standards relating to information resources.

SECTION 8.  Not later than September 1, 2026, the attorney general shall post on the attorney general’s Internet website the information and online mechanism required by Section 552.102, Business & Commerce Code, as added by this Act.

SECTION 9.  (a)  Notwithstanding any other section of this Act, in a state fiscal year, a state agency to which this Act applies is not required to implement a provision found in another section of this Act that is drafted as a mandatory provision imposing a duty on the agency to take an action unless money is specifically appropriated to the agency for that fiscal year to carry out that duty.  The agency may implement the provision in that fiscal year to the extent other funding is available to the agency to do so.

(b)  If, as authorized by Subsection (a) of this section, the state agency does not implement the mandatory provision in a state fiscal year, the state agency, in its legislative budget request for the next state fiscal biennium, shall certify that fact to the Legislative Budget Board and include a written estimate of the costs of implementing the provision in each year of that next state fiscal biennium.

SECTION 10.  This Act takes effect January 1, 2026.

    President of the Senate           Speaker of the House      

I certify that H.B. No. 149 was passed by the House on April 23, 2025, by the following vote:  Yeas 146, Nays 3, 1 present, not voting; and that the House concurred in Senate amendments to H.B. No. 149 on May 30, 2025, by the following vote:  Yeas 121, Nays 17, 2 present, not voting.

______________________________

Chief Clerk of the House   

I certify that H.B. No. 149 was passed by the Senate, with amendments, on May 23, 2025, by the following vote:  Yeas 31, Nays 0.

______________________________

Secretary of the Senate   

APPROVED: __________________

                 Date       

          __________________

               Governor       


Appendix 2 — Model Ordinance: Responsible Use of Artificial Intelligence in City Operations

ORDINANCE NO. ______

AN ORDINANCE

relating to the responsible use of artificial intelligence systems by the City; establishing transparency, accountability, and oversight requirements; and providing for implementation and administration.

WHEREAS, the City recognizes that artificial intelligence (“AI”) systems are increasingly used to improve operational efficiency, service delivery, data analysis, and internal workflows; and

WHEREAS, the City further recognizes that certain uses of AI may influence decisions affecting residents, employees, vendors, or regulated parties and therefore require appropriate oversight; and

WHEREAS, the City seeks to encourage responsible innovation while preserving public trust, transparency, and accountability; and

WHEREAS, the Texas Legislature has enacted the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026, establishing statewide standards for AI use by government entities; and

WHEREAS, the City recognizes that the adoption of artificial intelligence tools may, over time, change how work is performed and how staffing needs are structured, and that any such impacts are expected to occur gradually through attrition, reassignment, or role redesign rather than immediate workforce reductions;

NOW, THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF __________, TEXAS:

Section 1. Definitions

For purposes of this Ordinance:

  1. “Artificial Intelligence System” means a computational system that uses machine learning, statistical modeling, or related techniques to perform tasks normally associated with human intelligence, including analysis, prediction, classification, content generation, or prioritization.
  2. “Decision-Adjacent AI” means an AI system that materially influences, prioritizes, or recommends outcomes related to enforcement, eligibility, allocation of resources, personnel actions, procurement decisions, or public services, even if final decisions are made by a human.
  3. “High-Risk AI Use” means deployment of an AI system that directly or indirectly affects individual rights, access to services, enforcement actions, or legally protected interests.
  4. “Department” means any City department, office, division, or agency.

Section 2. Permitted Use of Artificial Intelligence

(a) Internal Productivity Uses. Departments may deploy AI systems for internal productivity and analytical purposes, including but not limited to:

  • Drafting and summarization of documents
  • Data analysis and forecasting
  • Workflow automation
  • Research and internal reporting
  • Customer-service chat tools providing general information (with disclaimers as appropriate)

Such uses shall not require prior Council approval but shall be subject to internal documentation requirements.

(b) Decision-Adjacent Uses. AI systems that influence or support decisions affecting residents, employees, vendors, or regulated entities may be deployed only in accordance with Sections 3 and 4 of this Ordinance.

Section 3. Prohibited Uses

No Department shall deploy or use an AI system that:

  1. Performs social scoring of individuals or groups based on behavior, personal traits, or reputation for the purpose of denying services, benefits, or rights;
  2. Intentionally discriminates against a protected class in violation of state or federal law;
  3.  Develops or deploys biometric identification or surveillance systems in violation of constitutional protections;
  4. Produces or facilitates unlawful deep-fake or deceptive content;
  5. Operates as a fully automated decision-making system without meaningful human review in matters affecting legal rights or obligations.

Section 4. Oversight and Approval for High-Risk AI Uses

(a) Inventory Requirement. The City Manager shall maintain a centralized AI Systems Inventory identifying:

  • Each AI system in use
  • The Department deploying the system
  • The system’s purpose
  • Whether the use is classified as high-risk

(b) Approval Process. Prior to deployment of any High-Risk AI Use, the Department must:

  1. Submit a written justification describing the system’s purpose and scope;
  2. Identify the data sources used by the system;
  3. Describe human oversight mechanisms;
  4. Obtain approval from:
    • The City Manager (or designee), and
    • The City Attorney for legal compliance review.

(c) Human Accountability. Each AI system shall have a designated human owner responsible for:

  • Monitoring performance
  • Responding to errors or complaints
  • Suspending use if risks are identified

Section 5. Transparency and Public Disclosure

(a) Disclosure to the Public. When a City AI system interacts directly with residents, the City shall provide clear notice that the interaction involves AI.

(b) Public Reporting. The City shall publish annually:

  • A summary of AI systems in use
  • The general purposes of high-risk AI systems
  • Contact information for public inquiries

No proprietary or security-sensitive information shall be disclosed.

Section 6. Procurement and Vendor Requirements

All City contracts involving AI systems shall, where applicable:

  1. Require disclosure of AI functions;
  2. Prohibit undisclosed algorithmic decision-making;
  3. Allow the City to audit or review AI system outputs relevant to City operations;
  4. Require vendors to notify the City of material changes to AI functionality.

Section 7. Review and Sunset

(a) Periodic Review. High-risk AI systems shall be reviewed at least annually to assess:

  • Accuracy
  • Bias
  • Continued necessity
  • Compliance with this Ordinance

(b) Sunset Authority. The City Manager may suspend or terminate use of any AI system that poses unacceptable risk or fails compliance review.

Section 8. Training

The City shall provide appropriate training to employees involved in:

  • Deploying AI systems
  • Supervising AI-assisted workflows
  • Interpreting AI-generated outputs

Section 9. Severability

If any provision of this Ordinance is held invalid, such invalidity shall not affect the remaining provisions.

Section 10. Effective Date

This Ordinance shall take effect immediately upon adoption.


Appendix 3 — City Manager Administrative Regulation: Responsible Use of Artificial Intelligence

ADMINISTRATIVE REGULATION NO. ___

Subject: Responsible Use of Artificial Intelligence (AI) in City Operations
Authority: Ordinance No. ___ (Responsible Use of Artificial Intelligence)
Issued by: City Manager
Effective Date: __________

1. Purpose

This Administrative Regulation establishes operational procedures for the responsible deployment, oversight, and monitoring of artificial intelligence (AI) systems used by the City, consistent with adopted Council policy and applicable state law.

The intent is to:

  • Enable rapid adoption of AI for productivity and service delivery;
  • Ensure transparency and accountability for higher-risk uses; and
  • Protect the City, employees, and residents from unintended consequences.

2. Scope

This regulation applies to all City departments, offices, and divisions that:

  • Develop, procure, deploy, or use AI systems; or
  • Rely on vendor-provided software that includes AI functionality.

3. AI System Classification

Departments shall classify AI systems into one of the following categories:

A. Tier 1 — Internal Productivity AI

Examples:

  • Document drafting and summarization
  • Data analysis and forecasting
  • Internal research and reporting
  • Workflow automation

Oversight Level:

  • Department-level approval
  • Registration in AI Inventory

B. Tier 2 — Decision-Adjacent AI

Examples:

  • Permit or inspection prioritization
  • Vendor or application risk scoring
  • Resource allocation recommendations
  • Enforcement or compliance triage

Oversight Level:

  • City Manager approval
  • Legal review
  • Annual performance review

C. Tier 3 — High-Risk AI

Examples:

  • AI influencing enforcement actions
  • Eligibility determinations
  • Public safety analytics
  • Biometric or surveillance tools

Oversight Level:

  • City Manager approval
  • City Attorney review
  • Documented human-in-the-loop controls
  • Annual audit and Council notification

4. AI Systems Inventory

The City Manager’s Office shall maintain a centralized AI Systems Inventory, which includes:

  • System name and vendor
  • Department owner
  • Purpose and classification tier
  • Date of deployment
  • Oversight requirements

Departments shall update the inventory prior to deploying any new AI system.
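
The inventory described in Sections 3 and 4 is, at bottom, a small structured dataset. For a city that tracks it in a lightweight internal tool rather than a commercial system, the record might be sketched as follows. This is an illustrative sketch only, not part of the model regulation; the field names, tier labels, and approval mapping are assumptions chosen to mirror Sections 3 through 5.

```python
from dataclasses import dataclass, field
from datetime import date
from enum import Enum


class Tier(Enum):
    """Classification tiers assumed from Section 3 of the regulation."""
    INTERNAL_PRODUCTIVITY = 1   # Tier 1
    DECISION_ADJACENT = 2       # Tier 2
    HIGH_RISK = 3               # Tier 3


# Illustrative mapping of each tier to the approvals listed in Section 5.
REQUIRED_APPROVALS = {
    Tier.INTERNAL_PRODUCTIVITY: ["Department Director"],
    Tier.DECISION_ADJACENT: ["City Manager", "City Attorney"],
    Tier.HIGH_RISK: ["City Manager", "City Attorney"],
}


@dataclass
class AISystemRecord:
    """One row in the AI Systems Inventory (fields assumed from Section 4)."""
    system_name: str
    vendor: str
    department_owner: str
    purpose: str
    tier: Tier
    deployed_on: date
    system_owner: str                      # accountable human (Section 6)
    oversight_notes: list[str] = field(default_factory=list)

    def required_approvals(self) -> list[str]:
        return REQUIRED_APPROVALS[self.tier]


if __name__ == "__main__":
    record = AISystemRecord(
        system_name="Permit triage assistant",      # hypothetical example
        vendor="ExampleVendor",
        department_owner="Development Services",
        purpose="Prioritize permit applications for review",
        tier=Tier.DECISION_ADJACENT,
        deployed_on=date(2026, 3, 1),
        system_owner="Assistant Director, Development Services",
    )
    print(record.system_name, "requires approval from:",
          ", ".join(record.required_approvals()))
```

However the inventory is stored, the point is the same: every system carries a named human owner, a classification tier, and a traceable approval path.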

5. Approval Process

A. Tier 1 Systems

  • Approved by Department Director
  • Registered in inventory

B. Tier 2 and Tier 3 Systems

Departments must submit:

  1. A description of the system and intended use
  2. Data sources and inputs
  3. Description of human oversight
  4. Risk mitigation measures

Approval required from:

  • City Manager (or designee)
  • City Attorney (for legal compliance)

6. Human Oversight & Accountability

Each AI system shall have a designated System Owner responsible for:

  • Monitoring system outputs
  • Responding to errors or complaints
  • Suspending use if risks emerge
  • Coordinating audits or reviews

No AI system may operate as a fully autonomous decision-maker for actions affecting legal rights or obligations.

7. Vendor & Procurement Controls

Procurement involving AI systems shall:

  • Identify AI functionality explicitly in solicitations
  • Require vendors to disclose material AI updates
  • Prohibit undisclosed algorithmic decision-making
  • Preserve City audit and review rights

8. Monitoring, Review & Sunset

  • Tier 2 and Tier 3 systems shall undergo annual review.
  • Systems may be suspended or sunset if:
    • Accuracy degrades
    • Bias is identified
    • Legal risk increases
    • The system no longer serves a defined purpose

9. Training

Departments deploying AI shall ensure appropriate staff training covering:

  • Proper interpretation of AI outputs
  • Limitations of AI systems
  • Escalation and error-handling procedures

10. Reporting to Council

The City Manager shall provide Council with:

  • An annual summary of AI systems in use
  • Identification of Tier 3 (High-Risk) systems
  • Any material incidents or corrective actions

11. Effective Date

This Administrative Regulation is effective immediately upon issuance.

12. Workforce Considerations

The use of artificial intelligence systems may change job functions and workflows over time. Departments shall:

  • Use AI to augment employee capabilities wherever possible;
  • Prioritize retraining, reassignment, and natural attrition when workflows change;
  • Coordinate with Human Resources before deploying AI systems that materially alter job duties; and
  • Recognize that long-term staffing impacts, if any, remain subject to City Manager and City Council authority.

Appendix 4 — Public-Facing FAQ: Responsible Use of Artificial Intelligence in City Operations

What is this ordinance about?

This ordinance establishes clear rules for how the City may use artificial intelligence (AI) tools. It allows the City to use modern technology to improve efficiency and service delivery while ensuring that higher-risk uses are transparent, accountable, and overseen by people.

Is the City already using artificial intelligence?

Yes. Like most modern organizations, the City already uses limited AI-enabled tools for tasks such as document drafting, data analysis, and customer service support, as well as through vendor-provided software systems.

This ordinance ensures those tools are used consistently and responsibly.

Is this ordinance banning artificial intelligence?

No.
The ordinance does not ban AI. It encourages responsible adoption of AI for productivity and internal efficiency while placing guardrails on uses that could affect people’s rights or access to services.

Why is the City adopting rules now?

AI tools are becoming more common and more capable. Clear rules help ensure:

  • Transparency in how AI is used
  • Accountability for outcomes
  • Compliance with new Texas law
  • Public trust in City operations

The Texas Legislature recently enacted statewide standards for AI use by government entities, and this ordinance aligns the City with those expectations.

Will artificial intelligence affect City jobs?

AI may change how work is done over time, just as previous technologies have.

This ordinance does not authorize immediate workforce reductions. Any long-term impacts are expected to occur gradually and, where possible, through:

  • Natural attrition
  • Reassignment
  • Retraining
  • Changes in job duties

Final staffing decisions remain with City leadership and City Council.

Will AI replace City employees?

AI tools are intended to assist employees, not replace human judgment. For higher-risk uses, the ordinance requires meaningful human oversight and accountability.

Can AI make decisions about me automatically?

No.
The ordinance prohibits fully automated decision-making that affects legal rights, enforcement actions, or access to services without human review.

AI may provide information or recommendations, but people remain responsible for decisions.

Will the City use AI for surveillance or facial recognition?

The ordinance prohibits AI uses that violate constitutional protections, including improper biometric surveillance.

Any use of biometric or surveillance-related AI would require strict legal review and compliance with state and federal law.

How will I know if I’m interacting with AI?

If the City uses AI systems that interact directly with residents, the City must clearly disclose that you are interacting with an AI system.

Does this apply to police or public safety?

Yes.
AI tools used in public safety contexts are considered higher-risk and require additional review, approval, and oversight. AI systems may not independently make enforcement decisions.

Who is responsible if an AI system makes a mistake?

Each AI system has a designated City employee responsible for monitoring its use, addressing errors, and suspending the system if necessary.

Responsibility remains with the City—not the software.

Will the public be able to see how AI is used?

Yes.
The City will publish an annual summary describing:

  • The types of AI systems in use
  • Their general purpose
  • How residents can ask questions or raise concerns

Sensitive or proprietary information will not be disclosed.

Does this create a new board or bureaucracy?

No.
Oversight is handled through existing City leadership and administrative structures.

Is there a cost to adopting this ordinance?

There is no direct cost associated with adoption. Over time, responsible AI use may help control costs by improving productivity and efficiency.

How often will this policy be reviewed?

Higher-risk AI systems are reviewed annually. The ordinance itself may be updated as technology and law evolve.

Who can I contact with questions or concerns?

Residents may contact the City Manager’s Office or submit inquiries through the City’s website. Information on AI use and reporting channels will be publicly available.

Bottom Line

This ordinance ensures the City:

  • Uses modern tools responsibly
  • Maintains human accountability
  • Protects public trust
  • Aligns with Texas law
  • Adapts thoughtfully to technological change

The Municipal & Business Workquake of 2026: Why Cities Must Redesign Roles Now—Before Attrition Does It for Them

A collaboration between Lewis McLain & AI

Cities are about to experience an administrative shift that will look nothing like a “tech revolution” and nothing like a classic workforce reduction. It will arrive as a workquake: a sudden drop in the labor required to complete routine tasks across multiple departments, driven by AI systems that can ingest documents, apply rules, assemble outputs, and draft narratives at scale.

The danger is not that cities will replace everyone with software. The danger is more subtle and far more likely: cities will allow AI to hollow out core functions unintentionally, through non-replacement hiring, scattered tool adoption, and informal workflow shortcuts—until the organization’s accountability structure no longer matches the work being done.

In 2026, the right posture is not fascination or fear. It is proactive redesign.


I. The Real Change: Task Takeover, Not Job Replacement

Municipal roles often look “human” because they involve public trust, compliance, and service. But much of the day-to-day work inside those roles is structured:

  • collecting inputs
  • applying policy checklists
  • preparing standardized packets
  • producing routine reports
  • tracking deadlines
  • drafting summaries
  • reconciling variances
  • adding narrative to numbers

Those tasks are precisely what modern AI systems now handle with speed and consistency. What remains human is still vital—but it is narrower: judgment, discretion, ethics, and accountability.

That creates the same pattern across departments:

  • the production layer shrinks rapidly
  • the review and exception layer becomes the job

Cities that don’t define this shift early will experience it late—as a staffing and governance crisis.


II. Example – City Secretary: Where Governance Work Becomes Automated

The city secretary function sits at the center of formal governance: agendas, minutes, public notices, records, ordinances, and elections. Much of the labor in this area is procedural and document-heavy.

Tasks likely to be absorbed quickly

  • Agenda assembly from departmental submissions
  • Packet compilation and formatting
  • Deadline tracking for posting and notices
  • Records indexing and retrieval
  • Draft minutes from audio/video with time stamps
  • Ordinance/resolution histories and cross-references

What shrinks

  • clerical assembly roles
  • manual transcription
  • routine records handling

What becomes more important

  • legal compliance judgment (Open Meetings, Public Information)
  • defensibility of the record
  • election integrity protocols
  • final human review of public-facing outputs

In other words: the city secretary role does not disappear. It becomes governance QA—with higher stakes and fewer support layers.


III. Example – Purchasing & Procurement: Where Process Becomes Automated Screening

Purchasing has always been a mix of routine compliance and high-risk discretion. AI hits the routine side first, fast.

Tasks likely to be absorbed quickly

  • quote comparisons and bid tabulations
  • price benchmarking against history and peers
  • contract template population
  • insurance/required-doc compliance checks
  • renewal tracking and vendor performance summaries
  • anomaly detection (odd pricing, split purchases, policy exceptions; a simple split-purchase flag is sketched at the end of this section)

What shrinks

  • bid tabulators
  • quote chasers
  • contract formatting staff
  • clerical procurement roles

What becomes more important

  • vendor disputes and negotiations
  • integrity controls (conflicts, favoritism risk)
  • exception approvals with documented reasoning
  • strategic sourcing decisions

Procurement shifts from “processing” to risk-managed decisioning.
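
One of the absorbed tasks listed above, anomaly detection, is worth making concrete. The sketch below shows one simple way a split-purchase flag could work, assuming a purchasing-card export with vendor, date, and amount fields and a single-purchase approval threshold. The threshold, the seven-day window, and the field names are assumptions for illustration, not a description of any particular city's system.

```python
from collections import defaultdict
from datetime import date, timedelta

# Assumed single-purchase approval threshold; real limits vary by city policy.
APPROVAL_THRESHOLD = 3_000.00
WINDOW_DAYS = 7  # assumed look-back window for grouping related purchases


def flag_possible_split_purchases(purchases):
    """Flag same-vendor purchases that individually stay under the approval
    threshold but together exceed it within a short window.

    `purchases` is a list of dicts with 'vendor', 'date' (datetime.date),
    and 'amount' keys; this is an assumed export format, not a real schema.
    """
    by_vendor = defaultdict(list)
    for p in purchases:
        by_vendor[p["vendor"]].append(p)

    flags = []
    for vendor, items in by_vendor.items():
        items.sort(key=lambda p: p["date"])
        for i, first in enumerate(items):
            window_end = first["date"] + timedelta(days=WINDOW_DAYS)
            cluster = [p for p in items[i:] if p["date"] <= window_end]
            total = sum(p["amount"] for p in cluster)
            under_limit = all(p["amount"] < APPROVAL_THRESHOLD for p in cluster)
            if len(cluster) > 1 and under_limit and total >= APPROVAL_THRESHOLD:
                flags.append((vendor, first["date"], total, len(cluster)))
                break  # one flag per vendor is enough to trigger human review
    return flags


if __name__ == "__main__":
    sample = [
        {"vendor": "Acme Supply", "date": date(2026, 2, 2), "amount": 2400.00},
        {"vendor": "Acme Supply", "date": date(2026, 2, 4), "amount": 2100.00},
        {"vendor": "Other Co", "date": date(2026, 2, 5), "amount": 500.00},
    ]
    for vendor, start, total, count in flag_possible_split_purchases(sample):
        print(f"Review: {count} purchases from {vendor} near {start} total ${total:,.2f}")
```

The flag does not decide anything; it queues a transaction set for the exception approvals and documented reasoning that remain human work.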


IV. Example – Budget Analysts: Where “Analysis” Separates from “Assembly”

Budget offices are often mistaken for purely analytical operations. In reality, a large share of the work is assembly: gathering departmental submissions, normalizing formats, building tables, writing routine narratives, and explaining variances.

Tasks likely to be absorbed quickly

  • ingestion and normalization of department requests
  • enforcement of submission rules and formatting
  • auto-generated variance explanations (a minimal arithmetic sketch follows at the end of this section)
  • draft budget narratives (department summaries, highlights)
  • scenario tables (base, constrained, growth cases)
  • continuous budget-to-actual reconciliation

What shrinks

  • entry-level budget analysts
  • table builders and narrative drafters
  • budget book production labor

What becomes more important

  • setting assumptions and policy levers
  • framing tradeoffs for leadership and council
  • long-range fiscal forecasting judgment
  • telling the truth clearly under political pressure

Budget staff shift from spreadsheet production to decision support and persuasion with integrity.
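
To make the variance-explanation task concrete, the sketch below shows the arithmetic an automated first pass might apply before a human analyst writes the real explanation. The materiality thresholds and the field layout are assumptions for illustration.

```python
# Minimal sketch of an automated variance flag, assuming a budget-to-actual
# export with department, line item, budget, and actual columns.
# The 5% / $25,000 materiality thresholds are assumptions for illustration.

PCT_THRESHOLD = 0.05
DOLLAR_THRESHOLD = 25_000


def variance_note(department, line_item, budget, actual):
    """Return a one-sentence variance note, or None if the variance is immaterial."""
    variance = actual - budget
    pct = variance / budget if budget else 0.0
    if abs(pct) < PCT_THRESHOLD and abs(variance) < DOLLAR_THRESHOLD:
        return None
    direction = "over" if variance > 0 else "under"
    return (f"{department} / {line_item}: {direction} budget by "
            f"${abs(variance):,.0f} ({abs(pct):.1%}); explanation required.")


if __name__ == "__main__":
    rows = [
        ("Parks", "Seasonal labor", 400_000, 452_000),
        ("Parks", "Utilities", 180_000, 183_500),
    ]
    for row in rows:
        note = variance_note(*row)
        if note:
            print(note)
```

The machine can compute the variance and draft the flag; deciding which variances matter, and why, stays with the analyst.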


V. Example – Police & Fire Data Analysts: Where Reporting Becomes Real-Time Patterning

Public safety analytics is one of the most automatable municipal domains because it is data-rich, structured, and continuous. The “report builder” role is especially vulnerable.

Tasks likely to be absorbed quickly

  • automated monthly/quarterly performance reporting
  • response-time distribution analysis
  • hotspot mapping and geospatial summaries
  • staffing demand pattern detection
  • anomaly flagging (unusual patterns in calls, activity, response)
  • draft CompStat-style narratives and slide-ready briefings

What shrinks

  • manual report builders
  • map producers
  • dashboard-only roles
  • grant-report drafters relying on routine metrics

What becomes more important

  • human interpretation (what the pattern means operationally)
  • explaining limitations and avoiding false certainty
  • bias and fairness oversight
  • defensible analytics for court, public inquiry, or media scrutiny

Public safety analytics becomes less about producing charts and more about protecting truth and trust.


VI. Example – More Roles Next in Line

Permitting & Development Review

AI can quickly absorb:

  • completeness checks
  • code cross-referencing
  • workflow routing and status updates
  • templated staff reports

Humans remain essential for:

  • discretionary judgments
  • negotiation with applicants
  • interpreting ambiguous code situations
  • public-facing case management

HR Analysts

AI absorbs:

  • classification comparisons
  • market surveys and comp modeling
  • policy drafting and FAQ support

Humans remain for:

  • discipline, negotiations, sensitive cases
  • equity judgments and culture
  • leadership counsel and conflict resolution

Grants Management

AI absorbs:

  • opportunity scanning and matching
  • compliance calendars
  • draft narrative sections and attachments lists

Humans remain for:

  • strategy (which grants matter)
  • partnerships and commitments
  • risk management and audit defense

VII. The Practical Reality in Cities: Attrition Is the Mechanism

This won’t arrive as dramatic layoffs. It will arrive as:

  • hiring freezes
  • “we won’t backfill that position”
  • consolidation of roles
  • sudden expectations that one person can do what three used to do

If cities do nothing, AI will still be adopted—piecemeal, unevenly, and without governance redesign. That produces an organization with:

  • fewer people
  • unclear accountability
  • heavier compliance risk
  • fragile institutional memory

VIII. What “Proactive” Looks Like in 2026

Cities need to act immediately in four practical ways:

  1. Define what must remain human
    • election integrity
    • public record defensibility
    • procurement exceptions and ethics
    • budget assumption-setting and council framing
    • public safety interpretation and bias oversight
  2. Separate production from review
    • let AI assemble
    • require humans to verify, approve, and own
  3. Rewrite job descriptions now
    • stop hiring for assembly work
    • hire for judgment, auditing, communication, and governance
  4. Build the governance layer
    • standards for AI outputs
    • audit trails
    • transparency policies
    • escalation rules
    • periodic review of AI-driven decisions

This is not an IT upgrade. It’s a redesign of how public authority is exercised.


Conclusion: The Choice Cities Face

Cities will adopt AI regardless—because the savings and speed will be undeniable. The only choice is whether the city adopts AI intentionally or accidentally.

If adopted intentionally, AI becomes:

  • a productivity tool
  • a compliance enhancer
  • a service accelerator

If adopted accidentally, AI becomes:

  • a quiet hollowing of institutional capacity
  • a transfer of control from policy to tool
  • and eventually a governance failure that will be blamed on people who never had the chance to redesign the system

2026 is early enough to steer the transition.
Waiting will not preserve the old model. It will only ensure the new one arrives without a plan.

End note: I usually spend at least a couple of days compiling all my bank and credit card records, assigning classifications, summarizing, and handing my CPA a complete set of documents. This year I uploaded the documents to AI, gave it instructions to prepare the package, and answered its list of questions about reconciliation and classification issues. Two hours later, I had the full package, with comparisons to past years drawn from the prior returns I also uploaded. I was 100% ready on New Year’s Eve, just waiting for the 1099s to arrive by the end of January. Meanwhile, I have had AI build out a comprehensive accounting system with clean schedules for cash flow, taxation notes, checklists reflecting new IRS rules, and general guidance – more than I was getting from my CPA. I’ll be able to take over the CPA duties myself. It’s just the start of the things I can turn over to AI while I become the editor and reviewer instead of doing the dreaded grunt work. LFM

The Infrastructure We Don’t See: Aging Gas Systems, Hidden Risks, and the Case for Annual Accountability

A collaboration between Lewis McLain & AI

It’s not if, but when!

Natural gas infrastructure is the most invisible—and therefore the most misunderstood—critical system in modern cities. Power lines are visible. Water mains announce themselves through pressure and flow. Roads crack and bridges age in plain sight. But gas lines remain buried, silent, and largely forgotten—until something goes wrong.

That invisibility is not benign. It creates a governance gap where responsibility is fragmented, risk is assumed rather than measured, and accountability is episodic instead of continuous. As cities grow denser, older, and more complex, that gap widens.

This essay makes a simple but demanding case: cities should require annual, technical accountability briefings from gas utilities and structured gas-safety evaluations for high-occupancy buildings—public and private—because safety is no longer assured by age, ownership boundaries, or regulatory compliance alone.

The ultimate question is not whether gas systems are regulated. They are.
The question is whether, at the local level, we are actually safer than we were a year ago.


I. The Aging Gas Network: A Technical Reality, Not a Hypothetical

Much of the U.S. gas distribution network was installed decades ago. While significant modernization has occurred, legacy materials—particularly cast iron and bare steel—still exist in pockets, often in the very neighborhoods where density, redevelopment, and consequence are highest.

These systems age in predictable ways:

  • Material degradation such as corrosion, joint failure, and metal fatigue
  • Ground movement from expansive soils, drought cycles, and freeze–thaw conditions
  • Pressure cycling driven by modern load variability
  • Construction interaction, including third-party damage during roadway, utility, and redevelopment projects

Technically speaking, aging is not a binary condition. It is a curve. Systems do not fail all at once; they fail where stress, material fatigue, and external disturbance intersect. Cities that approve redevelopment without understanding where those intersections lie are not managing risk—they are inheriting it.


II. Monitoring Is Better Than Ever—But It Is Not Replacement

Modern gas utilities deploy advanced leak detection technologies that did not exist a generation ago: mobile survey vehicles, high-sensitivity handheld sensors, aerial detection, and in some cases continuous monitoring.

Regulatory standards have improved as well. Leak surveys are more frequent, detection thresholds are lower, and repair timelines are clearer. From a technical standpoint, the industry is better at finding leaks than it was even a few years ago.

But monitoring is inherently reactive. It detects deterioration after it has begun. It does not restore structural integrity. It does not change the age profile of the system. It does not eliminate brittle joints or corrosion-prone materials.

Replacement is the only permanent risk reduction. And replacement is expensive, disruptive, and largely invisible unless cities require it to be discussed openly.


III. Why Annual Gas Utility Accountability Briefings Are Essential

Gas utilities operate under long-range capital replacement programs driven by regulatory approval, rate recovery, and internal prioritization models. Cities operate under land-use approvals, zoning changes, density increases, and redevelopment pressures that can change risk far faster than infrastructure plans adjust.

An annual gas utility accountability briefing is how those two worlds reconnect.

Not a promotional update. Not a general safety overview. But a technical, decision-grade briefing that allows city leadership to understand:

  • What materials remain in the ground
  • Where risk is concentrated
  • How fast legacy systems are being retired
  • Whether replacement is keeping pace with growth
  • Where development decisions may be increasing consequence

Without this, cities are effectively approving new intensity above ground while assuming adequacy below it.


IV. The Forgotten Segment: From the Meter to the Building

Most gas incidents that injure people do not originate in transmission pipelines or deep mains. They occur closest to occupied space—often in the short stretch between the gas meter and the building structure.

Legally, responsibility is clear:

  • The utility owns and maintains the system up to the meter.
  • The property owner owns everything downstream.

Assessment, however, is not.

Post-meter gas piping is frequently:

  • Older steel without modern corrosion protection
  • Stressed by foundation movement
  • Altered during remodels and additions
  • Poorly documented
  • Rarely inspected after initial construction

Utilities generally do not inspect customer-owned piping. Building departments see it only during permitted work. Fire departments respond after leaks are reported. Property owners often do not realize they own it.

This creates a true orphaned asset class: high-consequence infrastructure with no lifecycle oversight.


V. Responsibility Alone Is Not Safety

Cities often take comfort in the legal distinction: “That’s private property.” Legally, that is correct. Practically, it is insufficient.

Gas does not respect ownership boundaries. A failure inside a school, apartment building, restaurant, or nursing home becomes a public emergency immediately.

Risk governance does not require cities to assume liability. It requires them to ensure that someone is actually evaluating risk in places where failure would have severe consequences.


VI. Required Gas-Safety Evaluations for High-Occupancy Properties

This is the missing pillar of modern gas safety.

Just as elevators, fire suppression systems, and boilers undergo periodic inspection, gas piping systems in high-occupancy buildings should be subject to structured evaluation—regardless of whether the building is publicly or privately owned.

Facilities warranting mandatory evaluation include:

  • Schools (public and private)
  • Daycares
  • Nursing homes and assisted-living facilities
  • Hospitals and clinics
  • Large multifamily buildings
  • Assembly venues (churches, theaters, gyms)
  • Restaurants and food-service establishments
  • High-load commercial and industrial users

These are places where evacuation is difficult, ignition sources are common, and consequences are magnified.

A gas-safety evaluation should assess:

  • Condition and material of post-meter piping
  • Corrosion, support, and anchoring
  • Stress at building entry points
  • Evidence of undocumented modifications or abandoned lines
  • Accessibility and labeling of shutoff valves

These evaluations need not be frequent. They need to be periodic, triggered, and credible.


VII. Triggers That Make the System Work

Cities can implement this framework without blanket inspections by tying evaluations to specific events:

  • Change of occupancy or use
  • Major remodels or additions
  • Buildings reaching certain age thresholds when work is permitted
  • Repeated gas odor or leak responses
  • Sale or transfer of high-occupancy properties

This approach focuses effort where risk is most likely to have changed.


VIII. Public vs. Private: One Standard of Care

A gas explosion in a public school is not meaningfully different from one in a private daycare or restaurant. The victims do not care who owned the pipe.

A city that limits safety evaluation requirements to public buildings is acknowledging risk—but only partially. The standard should be risk-based, not ownership-based.


IX. Are We Better or Worse Off Than a Year Ago?

Technically, the answer is nuanced.

We are better off nationally in detection capability and regulatory clarity. Technology has improved. Survey frequency has increased. Reporting is stronger.

But many cities are likely worse off locally in exposure:

  • Buildings are older
  • Density is higher
  • Construction activity is heavier
  • Post-meter piping remains largely unassessed
  • High-occupancy facilities rely on outdated assumptions

So the honest answer is this:

We are better at finding problems—but not necessarily better at eliminating risk where people live, work, and gather.


X. Governance Is the Missing Link

Gas safety is no longer only an engineering problem. It is a governance problem.

Cities already regulate:

  • Land use and density
  • Building permits and occupancy
  • Business licensing
  • Emergency response coordination

Requiring annual gas utility accountability briefings and targeted gas-safety evaluations does not expand government arbitrarily. It closes a blind spot that modern urban conditions have exposed.


Conclusion: Asking the Right Question, Every Year

The most important question cities should ask annually is not:

“Did the utility comply with regulations?”

It is:

“Given our growth, our buildings, and our infrastructure, are we actually safer than we were last year?”

If city leaders cannot answer that clearly—above ground and below—it is not because the answer is unknowable.

It is because no one has required it to be known.


Appendix A — Model Ordinance: Gas Infrastructure Accountability and High-Occupancy Safety Evaluations

This model ordinance is designed to improve transparency, situational awareness, and public safety without transferring ownership, operational control, or liability from utilities or property owners to the City.


Section 1. Purpose and Findings

1.1 Purpose

The purpose of this ordinance is to:

  1. Improve transparency regarding the condition, monitoring, and replacement of gas infrastructure;
  2. Ensure that risks associated with aging gas systems are identified and reduced over time;
  3. Require periodic gas safety evaluations for high-occupancy buildings where consequences of failure are greatest;
  4. Strengthen coordination among gas utilities, property owners, and City emergency services; and
  5. Establish consistent, decision-grade information for City leadership.

1.2 Findings

The City Council finds that:

  1. Natural gas infrastructure is largely underground and not visible to the public.
  2. Portions of the gas system—including customer-owned piping—may age without systematic reassessment.
  3. Increased density, redevelopment, and construction activity elevate the consequences of gas failures.
  4. Existing regulatory frameworks do not provide city-specific visibility into system condition or replacement progress.
  5. Periodic reporting and targeted evaluation improve public safety without assuming utility or private ownership responsibilities.

Section 2. Annual Gas Utility Accountability Briefing

2.1 Requirement

Each gas utility operating within the City shall provide an Annual Gas Infrastructure Accountability Briefing to the City Council or its designated committee.

2.2 Scope

The briefing shall address, at a minimum:

  • Pipeline materials and age profile;
  • Replacement progress and future plans;
  • Leak detection, classification, and repair performance;
  • High-consequence areas and impacts of development;
  • Construction coordination and damage prevention;
  • Emergency response readiness and communication protocols.

2.3 Format and Standards

  • Briefings shall include written materials, maps, and data tables.
  • Metrics shall be presented in a year-over-year comparable format.
  • Information shall be technical, factual, and suitable for governance decision-making.

2.4 No Transfer of Liability

Nothing in this section shall be construed to transfer ownership, maintenance responsibility, or operational control of gas facilities to the City.


Section 3. High-Occupancy Gas Safety Evaluations

3.1 Covered Facilities

Gas safety evaluations are required for the following facilities, whether publicly or privately owned:

  • Schools (public and private)
  • Daycare facilities
  • Nursing homes and assisted-living facilities
  • Hospitals and medical clinics
  • Multifamily buildings exceeding [X] dwelling units
  • Assembly occupancies exceeding [X] persons
  • Restaurants and commercial food-service establishments
  • Other facilities designated by the Fire Marshal as high-consequence occupancies

3.2 Scope of Evaluation

Evaluations shall assess:

  • Condition and materials of post-meter gas piping
  • Corrosion potential and structural support
  • Stress at building entry points and foundations
  • Evidence of undocumented modifications or abandoned piping
  • Accessibility, labeling, and operation of shutoff valves

3.3 Qualified Evaluators

Evaluations shall be conducted by:

  • Licensed plumbers,
  • Licensed mechanical contractors, or
  • Professional engineers with gas system experience.

3.4 Triggers

Evaluations shall be required upon:

  • Change of occupancy or use;
  • Major remodels or building additions;
  • Buildings reaching [X] years of age when permits are issued;
  • Repeated gas odor complaints or leak responses;
  • Sale or transfer of covered properties, if adopted by the City.

Section 4. Documentation and Compliance

4.1 Certification

Property owners shall submit documentation certifying completion of required evaluations.

4.2 Corrective Action

Identified hazards shall be corrected within timeframes established by code officials.

4.3 Enforcement

Non-compliance may result in:

  • Withholding of permits or certificates of occupancy;
  • Temporary suspension of approvals;
  • Administrative penalties as authorized by law.

Section 5. Education and Coordination

The City shall:

  • Provide educational materials clarifying ownership and safety responsibilities;
  • Coordinate with gas utilities on public outreach;
  • Integrate findings into emergency response planning and training.


Appendix B — Annual Gas Utility Accountability Briefing: Preparation Checklist

This checklist ensures annual briefings are consistent, measurable, and focused on risk reduction rather than general compliance.


I. System Inventory & Condition

☐ Total pipeline miles within city limits (distribution vs. transmission)
☐ Pipeline miles by material type
☐ Pipeline miles by decade installed
☐ Location and extent of remaining legacy materials
☐ Identification of oldest segments still in service


II. Replacement Progress

☐ Miles replaced in the previous year (by material type)
☐ Five-year replacement plan with schedules
☐ Funded vs. unfunded replacement projects
☐ Year-over-year reduction in legacy materials
☐ Explanation of changes from prior plans


III. Leak Detection & Repair Performance

☐ Total leaks detected (normalized per mile)
☐ Leak classification breakdown
☐ Average and maximum repair times by class
☐ Repeat leak locations identified and mapped
☐ Root-cause analysis of recurring issues


IV. Monitoring Technology

☐ Detection technologies currently deployed
☐ Survey frequency achieved vs. required
☐ Use of advanced or emerging detection tools
☐ Known limitations of monitoring methods


V. High-Consequence Areas

☐ Definition and criteria for high-consequence zones
☐ Updated risk maps
☐ Impact of new development on risk profile
☐ Trunk lines serving rapidly densifying areas


VI. Construction & Damage Prevention

☐ Third-party damage incidents
☐ 811 ticket response performance
☐ High-risk project types identified
☐ Coordination procedures with City capital projects


VII. Emergency Response Readiness

☐ Incident response timelines
☐ Coordination with fire, police, and emergency management
☐ Date and scope of last joint exercise or drill
☐ Public communication and notification protocols


VIII. Customer-Owned (Post-Meter) Piping

☐ Incidents involving post-meter piping
☐ Common failure materials or conditions
☐ Customer education and outreach efforts
☐ Voluntary inspection or assistance programs


IX. Forward-Looking Risk Assessment

☐ Top unresolved risks
☐ Areas of greatest concern
☐ Commitments for the next 12 months
☐ Clear answer to:
“Are we safer than last year—and why?”


Closing Note

A briefing that cannot complete this checklist is not merely incomplete; it reveals where risk remains unmanaged.

That visibility is the purpose of accountability.

An Update on Drone Uses in Texas Municipalities

A second collaboration between Lewis McLain & AI

From Tactical Tools to a Quiet Redefinition of First Response

A decade ago, a municipal drone program in Texas usually meant a small team, a locked cabinet, and a handful of specially trained officers who were called out when circumstances justified it. The drone was an accessory—useful, sometimes impressive, but peripheral to the ordinary rhythm of public safety.

That is no longer the case.

Across Texas, drones are being absorbed into the daily mechanics of emergency response. In a growing number of cities, they are no longer something an officer brings to a scene. They are something the city sends—often before the first patrol car, engine, or ambulance has cleared an intersection.

This shift is subtle, technical, and easily misunderstood. But it represents one of the most consequential changes in municipal public safety design in a generation.


The quiet shift from tools to systems

The defining change is not better cameras or longer flight times. It is program design.

Early drone programs were built around people: pilots, certifications, and equipment checklists. Today’s programs are built around systems—launch infrastructure, dispatch logic, real-time command centers, and policies that define when a drone may be used and, just as importantly, when it may not.

Cities like Arlington illustrate this evolution clearly. Arlington’s drones are not stored in trunks or deployed opportunistically. They launch from fixed docking stations, are controlled through the city’s real-time operations center, and are sent to calls the way any other responder would be. The drone’s role is not to replace officers, but to give them something they rarely had before arrival: certainty.

Is someone actually inside the building? Is the suspect still there? Is the person lying in the roadway injured or already moving? These are small questions, but they shape everything that follows. In many cases, the presence of a drone overhead resolves a situation before physical contact ever occurs.

That pattern—early information reducing risk—is now being repeated, in different forms, across the state.


North Texas as an early laboratory

In North Texas, the progression from experimentation to normalization is especially visible.

Arlington’s program has become a reference point, not because it is flashy, but because it works. Drones are treated as routine assets, subject to policy, supervision, and after-action review. Their value is measured in response times and avoided escalations, not in flight hours.

Nearby, Dallas is navigating a more complex path. Dallas already operates one of the most active municipal drone programs in the state, but scale changes everything. Dense neighborhoods, layered airspace, multiple airports, and heightened civil-liberties scrutiny mean that Dallas cannot simply replicate what smaller cities have done.

Instead, Dallas appears to be doing something more consequential: deliberately embedding “Drone as First Responder” (DFR) capability into its broader public-safety technology framework. Procurement language and public statements now describe drones verifying caller information while officers respond—a quiet but important acknowledgement that drones are becoming part of the dispatch process itself. If Dallas succeeds, it will establish a model for large, complex cities that have so far watched DFR from a distance.

Smaller cities have moved faster.

Prosper, for example, has embraced automation as a way to overcome limited staffing and long travel distances. Its program emphasizes speed—sub-two-minute arrivals made possible by automated docking stations that handle charging and readiness without human intervention. Prosper’s experience suggests that cities do not have to grow into DFR gradually; some can leap directly to system-level deployment.

Cities like Euless represent another important strand of adoption. Their programs are smaller, more cautious, and intentionally bounded. They launch drones to specific call types, collect experience, and adjust policy as they go. These cities matter because they demonstrate how DFR spreads laterally, city by city, through observation and imitation rather than mandates or statewide directives.


South Texas and the widening geography of DFR

DFR is not a North Texas phenomenon.

In the Rio Grande Valley, Edinburg has publicly embraced dispatch-driven drone response for crashes, crimes in progress, and search-and-rescue missions, including night operations using thermal imaging. In regions where heat, terrain, and distance complicate traditional response, the value of rapid aerial awareness is obvious.

Further west, Laredo has framed drones as part of a broader rapid-response network rather than a narrow policing tool. Discussions there extend beyond observation to include overdose response and medical support, pointing toward a future where drones do more than watch—they enable intervention while ground units close the gap.

Meanwhile, cities like Pearland have quietly done the hardest work of all: making DFR ordinary. Pearland’s early focus on remote operations and program governance is frequently cited by other cities, even when it draws little public attention. Its lesson is simple but powerful: the more boring a drone program becomes, the more likely it is to scale.


What 2026 will likely bring

By 2026, Texas municipalities will no longer debate drones in abstract terms. The conversation will shift to coverage, performance, and restraint.

City leaders will ask how much of their jurisdiction can be reached within two or three minutes, and what it costs to achieve that standard. DFR coverage maps will begin to resemble fire-station service areas, and response-time percentiles will replace anecdotal success stories.

Dispatch ownership will matter more than pilot skill. The most successful programs will be those in which drones are managed as part of the call-taking and response ecosystem, not as specialty assets waiting for permission. Pilots will become supervisors of systems, not just operators of aircraft.

At the same time, privacy will increasingly determine the pace of expansion. Cities that define limits early—what drones will never be used for, how long video is kept, who can access it—will move faster and with less friction. Those that delay these conversations will find themselves stalled, not by technology, but by public distrust.

Federal airspace rules will continue to separate tactical programs from scalable ones. Dense metro areas will demand more sophisticated solutions—automated docks, detect-and-avoid capabilities, and carefully designed flight corridors. The cities that solve these problems will not just have better drones; they will have better systems.

And perhaps most telling of all, drones will gradually fade from public conversation. When residents stop noticing them—when a drone overhead is no more remarkable than a patrol car passing by—the transformation will be complete.


A closing thought

Texas cities are not adopting drones because they are fashionable or futuristic. They are doing so because time matters, uncertainty creates risk, and early information saves lives—sometimes by prompting action, and sometimes by preventing it.

By 2026, the question will not be whether drones belong in municipal public safety. It will be why any city, given the chance to act earlier and safer, would choose not to.


Looking Ahead to 2026: When Drones Become Ordinary

By 2026, the most telling sign of success for municipal drone programs in Texas will not be innovation, expansion, or even capability. It will be normalcy.

The early years of public-safety drones were marked by novelty. A drone launch drew attention, generated headlines, and often triggered anxiety about surveillance or overreach. That phase is already fading. What is emerging in its place is quieter and far more consequential: drones becoming an assumed part of the response environment, much like radios, body cameras, or computer-aided dispatch systems once did.

The conversation will no longer revolve around whether a city has drones. Instead, it will focus on coverage and performance. City leaders will ask how quickly aerial eyes can reach different parts of the city, how often drones arrive before ground units, and what percentage of priority calls benefit from early visual confirmation. Response-time charts and service-area maps will replace anecdotes and demonstrations. In this sense, drones will stop being treated as technology and start being treated as infrastructure.

This shift will also clarify responsibility. The most mature programs will no longer center on individual pilots or specialty units. Ownership will move decisively toward dispatch and real-time operations centers. Drones will be launched because a call meets predefined criteria, not because someone happens to be available or enthusiastic. Pilots will increasingly function as system supervisors, ensuring compliance, safety, and continuity, rather than as hands-on operators for every flight.

At the same time, restraint will become just as important as reach. Cities that succeed will be those that articulate, early and clearly, what drones are not for. By 2026, residents will expect drone programs to come with explicit boundaries: no routine patrols, no generalized surveillance, no silent expansion of mission. Programs that fail to define those limits will find themselves stalled, regardless of how capable the technology may be.

Federal airspace rules and urban complexity will further separate casual programs from durable ones. Large cities will discover that scaling drones is less about buying more aircraft and more about solving coordination problems—airspace, redundancy, automation, and integration with other systems. The cities that work through those constraints will not just fly more often; they will fly predictably and defensibly.

And then, gradually, the attention will drift away.

When a drone arriving overhead is no longer remarkable—when it is simply understood as one of the first tools a city sends to make sense of an uncertain situation—the transition will be complete. The public will not notice drones because they will no longer symbolize change. They will symbolize continuity.

That is the destination Texas municipalities are approaching: not a future where drones dominate public safety, but one where they quietly support it—reducing uncertainty, improving judgment, and often preventing escalation precisely because they arrive early and ask the simplest question first: What is really happening here?

By 2026, the most advanced drone programs in Texas will not feel futuristic at all. They will feel inevitable.

What Question Are We Actually Answering?

A collaboration between Lewis McLain & AI

Why Good Analysis Begins Long Before Data — and Why Asking Better Questions Is a Skill That Must Be Practiced


I. The Invisible Starting Line

Every serious analysis begins with a question.
Almost every serious failure begins with the wrong one.

This is uncomfortable because it means that many errors are not technical. They are not caused by bad data, weak models, insufficient funding, or lack of expertise. They occur before any of that—at the moment a question is framed, accepted, and allowed to go unchallenged.

Questions are often inherited rather than chosen. They arrive embedded in headlines, legislation, grant applications, consulting scopes, software templates, or political urgency. By the time anyone pauses to ask whether the question itself is sound, the machinery is already moving.

Once that happens, better data does not fix the problem.
It accelerates it.

Precision is not clarity. A precisely answered wrong question produces results that feel authoritative while being fundamentally misleading. This is why analysis so often fails quietly and confidently.


II. The Four Types of Questions (And Why Only One Sustains Analysis)

Not all questions do the same kind of work. Most confusion in public debate and institutional decision-making comes from treating very different questions as if they were interchangeable.

1. Descriptive Questions

What is happening?

These establish facts, counts, and trends. They are necessary, but inert. Description alone does not explain change, causation, or constraint. Mistaking description for understanding is one of the most common analytical errors.

2. Attributional Questions

Who is responsible?

These arrive early and loudly. They satisfy emotional and political needs, but they tend to collapse complex systems into villains and heroes. Attribution feels like insight, but it usually precedes understanding.

3. Prescriptive Questions

What should we do?

These feel decisive and productive. They are also dangerous when asked prematurely. Prescriptions lock systems into action paths that may be impossible to reverse, even if the diagnosis was wrong.

4. Analytical Questions

What changed, relative to what, over what time horizon, and under which constraints?

These are the least intuitive and least rewarded questions, yet they are the only ones that scale. They slow the conversation down, resist moral shortcuts, and force structure onto complexity.

Most debates skip directly from description to prescription. Analysis happens, if at all, in the margins.


III. Time Horizons: The Quiet Distorter

Every question implies a time frame, whether stated or not. When it goes unstated, it is almost always too short.

Systems behave differently over one year than over five, and differently again over a generation. Short horizons hide maturation effects, suppress lagged consequences, and reward surface solutions. Long horizons expose tradeoffs, reveal inevitabilities, and demand humility.

When someone asks, “Why is this happening now?” without clarifying whether “now” means this quarter, this decade, or this lifecycle stage, the answer will be confident and wrong.

A reliable analytical rule is simple:
If the time horizon is unstated, it is probably distorting the conclusion.


IV. Baselines: The Question Nobody Wants to Ask

“Compared to what?” is the most expensive sentence in analysis.

Baselines are almost always chosen quietly and defended rarely. Yet they determine whether something appears as growth or stagnation, crisis or normal variation, success or failure.

Common baseline errors include:

  • Comparing growing systems to static ones
  • Comparing interventions to “doing nothing,” which never exists
  • Comparing today to yesterday instead of to trend or lifecycle stage

Without a baseline, change has no meaning. Without an agreed-upon baseline, debate becomes endless recalibration rather than understanding.

The refusal—or failure—to ask baseline questions is not a technical oversight. It is often a psychological one. Baselines make certain narratives harder to maintain.


V. The Substitution Problem

Systems do not eliminate pressure. They redirect it.

Every policy, reform, or intervention substitutes one cost, risk, or burden for another. The analytical failure is not unintended consequences; it is unacknowledged substitution.

When analysis celebrates a solution without tracing where pressure moved, it is incomplete by definition. The question “What problem did we solve?” must be followed immediately by “Where did the pressure go?”

Ignoring substitution allows success to be declared in one domain while strain accumulates invisibly in another.


VI. Metrics Are Mirrors, Not Truth

Metrics are indispensable—and dangerous.

They capture what is easy to measure, not necessarily what matters most. They reward visibility, not durability. They improve responsiveness but often degrade resilience.

Measurement should provoke questions, not end them. When metrics become substitutes for judgment, they stop illuminating reality and begin reflecting institutional incentives back at themselves.

What improves on paper may be decaying in practice. The analyst’s task is not to reject metrics, but to interrogate them relentlessly.


VII. The Discipline of the Second Question

Most people ask one good question. Then they stop.

The first question usually reveals curiosity. The second reveals discipline.

  • First question: What happened?
  • Second question: Relative to what expectation?
  • Third question: Why now and not earlier?
  • Fourth question: At whose expense did this improve?
  • Fifth question: What constraint was binding?

Most analytical errors occur between questions one and two. The pause required to ask the second question feels unproductive, even obstructive. In reality, it is where understanding begins.


VIII. Asking Good Questions Is a Skill — and It Must Be Practiced

The ability to ask good questions is not innate. It is trained.

It requires resisting the urge to sound smart quickly. It requires tolerating ambiguity longer than is comfortable. It requires being willing to appear slow, cautious, or even naïve in environments that reward speed and certainty.

Like any discipline, it improves through repetition:

  • Reviewing past analyses and identifying where the wrong question was asked
  • Practicing reframing problems in multiple ways before selecting one
  • Studying failures not for answers, but for misframed questions
  • Learning to sit with incomplete understanding without rushing to closure

Good questioners are not passive. They are rigorous. They know that the hardest work happens before the first chart, model, or recommendation.


IX. What Your Questions Reveal About You

Questions are diagnostic. They reveal far more about the questioner than about the subject being questioned.

They reveal:

  • Whether someone is seeking understanding or validation
  • Whether they tolerate uncertainty or rush to control
  • Whether they think in systems or in narratives
  • Whether they are curious about limits or allergic to them

A person who habitually asks attributional questions before analytical ones is revealing impatience with complexity. A person who never asks baseline or time-horizon questions is revealing comfort with surface explanations.

In this sense, questions are a form of moral autobiography. Over time, they expose whether a person is oriented toward truth, persuasion, blame, or reassurance.


X. Analysis as Responsibility

Analysis is not neutral. It shapes how resources are allocated, how authority is exercised, and how force—legal, financial, or moral—is applied.

Bad questions do not merely mislead; they coerce. They narrow the range of permissible answers and foreclose alternatives before they are considered.

The responsibility of the analyst is not certainty. It is honesty about limits, tradeoffs, and unknowns. Asking better questions is not intellectual vanity; it is an ethical act.


Conclusion

The most dangerous answers are not the wrong ones.
They are the ones that emerge from unexamined questions.

Before asking what the data says, before debating solutions, before declaring success or failure, the analyst owes one discipline above all others:

Stop.
Name the question.
Interrogate it.
And be willing to change it.

That pause—unrewarded, uncomfortable, and often invisible—is where real thinking begins.

The Modern Financial & General Analyst’s Core Skill Set

Excel, SQL Server, Power BI — With AI Doing the Heavy Lifting

A collaboration between Lewis McLain & AI

Introduction: The Skill That Now Matters Most

The most important analytical skill today is no longer memorizing syntax, mastering a single tool, or becoming a narrow specialist.

The must-have skill is knowing how to direct intelligence.

In practice, that means combining:

  • Excel for thinking, modeling, and scenarios
  • SQL Server for structure, scale, and truth
  • Power BI for communication and decision-making
  • AI as the teacher, coder, documenter, and debugger

This is not about replacing people with AI.
It is about finally separating what humans are best at from what machines are best at—and letting each do their job.


1. Stop Explaining. Start Supplying.

One of the biggest mistakes people make with AI is trying to explain complex systems to it in conversation.

That is backward.

The Better Approach

If your organization has:

  • an 80-page budget manual
  • a cost allocation policy
  • a grant compliance guide
  • a financial procedures handbook
  • even the City Charter

Do not summarize it for AI.
Give AI the document.

Then say:

“Read this entire manual. Summarize it back to me in 3–5 pages so I can confirm your understanding.”

This is where AI excels.

AI is extraordinarily good at:

  • absorbing long, dense documents
  • identifying structure and hierarchy
  • extracting rules, exceptions, and dependencies
  • restating complex material in plain language

Once AI demonstrates understanding, you can say:

“Assume this manual governs how we budget. Based on that understanding, design a new feature that…”

From that point on, AI is no longer guessing.
It is operating within your rules.

This is the fundamental shift:

  • Humans provide authoritative context
  • AI provides execution, extension, and suggested next steps

You will see this principle repeated throughout this post and the appendices—because everything else builds on it.


2. The Stack Still Matters (But for Different Reasons Now)

AI does not eliminate the need for Excel, SQL Server, or Power BI.
It makes them far more powerful—and far more accessible.


Excel — The Thinking and Scenario Environment

Excel remains the fastest way to:

  • test ideas
  • explore “what if” questions
  • model scenarios
  • communicate assumptions clearly

What has changed is not Excel—it is the burden placed on the human.

You no longer need to:

  • remember every formula
  • write VBA macros from scratch
  • search forums for error messages

AI already understands:

  • Excel formulas
  • Power Query
  • VBA (Visual Basic for Applications, Excel’s automation language)

You can say:

“Write an Excel model with inputs, calculations, and outputs for this scenario.”

AI will:

  • generate the formulas
  • structure the workbook cleanly
  • comment the logic
  • explain how it works

If something breaks:

  • AI reads the error message
  • explains why it occurred
  • fixes the formula or macro

Excel becomes what it was always meant to be:
a thinking space, not a memory test.


SQL Server — The System of Record and Truth

SQL Server is where analysis becomes reliable, repeatable, and scalable.

It holds:

  • historical data (millions of records are routine)
  • structured dimensions
  • consistent definitions
  • auditable transformations

Here is the shift AI enables:

You do not need to be a syntax expert.

SQL (Structured Query Language) is something AI already understands deeply.

You can say:

“Create a SQL view that allocates indirect costs by service hours. Include validation queries.”

AI will:

  • write the SQL
  • optimize joins
  • add comments
  • generate test queries
  • flag edge cases
  • produce clear documentation

AI can also interpret SQL Server error messages, explain them in plain English, and rewrite the code correctly.

This removes one of the biggest barriers between finance and data systems.

SQL stops being “IT-only” and becomes a shared analytical language, with AI translating analytical intent into executable code.
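
To make this concrete, here is a minimal sketch of the kind of view the indirect-cost prompt above might yield. Every object name in it (dbo.ServiceHours, dbo.IndirectCosts, FiscalYear, and so on) is a hypothetical placeholder rather than a reference to any particular system, and a real implementation would follow your own allocation policy.

  -- Hypothetical sketch: allocate an indirect cost pool to departments
  -- in proportion to the service hours each department delivered.
  CREATE VIEW dbo.vw_IndirectCostAllocation AS
  WITH DeptHours AS (
      -- hours of service delivered by each department, by fiscal year
      SELECT FiscalYear, DepartmentID, SUM(ServiceHours) AS DeptServiceHours
      FROM dbo.ServiceHours
      GROUP BY FiscalYear, DepartmentID
  ),
  Pool AS (
      -- the indirect cost pool to be allocated, by fiscal year
      SELECT FiscalYear, SUM(Amount) AS TotalIndirectCost
      FROM dbo.IndirectCosts
      GROUP BY FiscalYear
  )
  SELECT
      d.FiscalYear,
      d.DepartmentID,
      d.DeptServiceHours,
      p.TotalIndirectCost,
      d.DeptServiceHours * 1.0
          / NULLIF(SUM(d.DeptServiceHours) OVER (PARTITION BY d.FiscalYear), 0) AS AllocationShare,
      p.TotalIndirectCost * d.DeptServiceHours * 1.0
          / NULLIF(SUM(d.DeptServiceHours) OVER (PARTITION BY d.FiscalYear), 0) AS AllocatedIndirectCost
  FROM DeptHours AS d
  JOIN Pool AS p
      ON p.FiscalYear = d.FiscalYear;
  GO

  -- Validation: summed allocations should tie back to the cost pool every year.
  SELECT
      FiscalYear,
      SUM(AllocatedIndirectCost)                           AS AllocatedTotal,
      MAX(TotalIndirectCost)                               AS PoolTotal,
      SUM(AllocatedIndirectCost) - MAX(TotalIndirectCost)  AS Residual
  FROM dbo.vw_IndirectCostAllocation
  GROUP BY FiscalYear;

The validation query at the end is exactly the kind of "totals tie out" check referenced throughout this post: the allocated amounts, summed, should reproduce the original cost pool with a residual of zero.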


Power BI — Where Decisions Happen

Power BI is the communication layer: dashboards, trends, drilldowns, and monitoring.

It relies on DAX (Data Analysis Expressions), the calculation language used by Power BI.

Here is the key reassurance:

AI already understands DAX extremely well.

DAX is:

  • rule-based
  • pattern-driven
  • language-like

This makes it ideal for AI assistance.

You do not need to memorize DAX syntax.
You need to describe what you want.

For example:

“I want year-over-year change, rolling 12-month averages, and per-capita measures that respect slicers.”

AI can:

  • write the measures
  • explain filter context
  • fix common mistakes
  • refactor slow logic
  • document what each measure does

Power BI becomes less about struggling with formulas and more about designing the right questions.


3. AI as the Documentation Engine (Quietly Transformational)

Documentation is where most analytical systems decay.

  • Excel models with no explanation
  • SQL views nobody understands
  • Macros written years ago by someone who left
  • Reports that “work” but cannot be trusted

AI changes this completely.

SQL Documentation

AI can:

  • add inline comments to SQL queries
  • write plain-English descriptions of each view
  • explain table relationships
  • generate data dictionaries automatically

You can say:

“Document this SQL view so a new analyst understands it.”

And receive:

  • a clear narrative
  • assumptions spelled out
  • warnings about common mistakes
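
As one small illustration of the "data dictionaries automatically" point above, much of the raw material already lives in SQL Server's catalog views. A sketch like the following pulls a starter dictionary out of INFORMATION_SCHEMA, leaving only the plain-English descriptions for a human (or an AI draft) to supply; the ColumnDescription placeholder is hypothetical, not a built-in field.

  -- Hypothetical sketch: a starter data dictionary built from SQL Server metadata.
  SELECT
      c.TABLE_SCHEMA,
      c.TABLE_NAME,
      c.COLUMN_NAME,
      c.DATA_TYPE,
      c.CHARACTER_MAXIMUM_LENGTH,
      c.IS_NULLABLE,
      CAST(NULL AS NVARCHAR(400)) AS ColumnDescription   -- to be written, not generated
  FROM INFORMATION_SCHEMA.COLUMNS AS c
  ORDER BY c.TABLE_SCHEMA, c.TABLE_NAME, c.ORDINAL_POSITION;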

Excel & Macro Documentation

AI can:

  • explain what each worksheet does
  • document VBA macros line-by-line
  • generate user instructions
  • rewrite messy macros into cleaner, documented code

Recently, I had a powerful but stodgy Excel workbook with over 1.4 million formulas.
AI read the entire file, explained the internal logic accurately, and rewrote the system in SQL with a few hundred well-documented lines—producing identical results.

Documentation stops being an afterthought.
It becomes cheap, fast, and automatic.


4. AI as Debugger and Interpreter

One of AI’s most underrated strengths is error interpretation.

AI excels at:

  • reading cryptic error messages
  • identifying likely causes
  • suggesting fixes
  • explaining failures in plain language

You can copy-paste an error message without comment and say:

“Explain this error and fix the code.”

This applies to:

  • Excel formulas
  • VBA macros
  • SQL queries
  • Power BI refresh errors
  • DAX logic problems

Hours of frustration collapse into minutes.


5. What Humans Still Must Do (And Always Will)

AI is powerful—but it is not responsible for outcomes.

Humans must still:

  • define what words mean (“cost,” “revenue,” “allocation”)
  • set policy boundaries
  • decide what is reasonable
  • validate results
  • interpret implications
  • make decisions

The human role becomes:

  • director
  • creator
  • editor
  • judge
  • translator

AI does not replace judgment.
It amplifies disciplined judgment.


6. Why This Matters Across the Organization

For Managers

  • Faster insight
  • Clearer explanations
  • Fewer “mystery numbers”
  • Greater confidence in decisions

For Finance Professionals

  • Less time fighting tools
  • More time on policy, tradeoffs, and risk
  • Stronger documentation and audit readiness

For IT Professionals

  • Cleaner specifications
  • Fewer misunderstandings
  • Better separation of logic and presentation
  • More maintainable systems

This is not a turf shift.
It is a clarity shift.


7. The Real Skill Shift

The modern analyst does not need to:

  • memorize every function
  • master every syntax rule
  • become a full-time programmer

The modern analyst must:

  • ask clear questions
  • supply authoritative context
  • define constraints
  • validate outputs
  • communicate meaning

AI handles the rest.


Conclusion: Intelligence, Directed

Excel, SQL Server, and Power BI remain the backbone of serious analysis—not because they are trendy, but because they mirror how thinking, systems, and decisions actually work.

AI changes how we use them:

  • it reads the manuals
  • writes the code
  • documents the logic
  • fixes the errors
  • explains the results

Humans provide direction.
AI provides execution.

Those who learn to work this way will not just be more efficient—they will be more credible, more influential, and more future-proof.


Appendix A

A Practical AI Prompt Library for Finance, Government, and Analytical Professionals

This appendix is meant to be used, not admired.

These prompts reflect how professionals actually work: with rules, constraints, audits, deadlines, and political consequences.

You are not asking AI to “be smart.”
You are directing intelligence.


A.1 Foundational “Read & Confirm” Prompts (Critical)

Use these first. Always.

Prompt

“Read the attached document in full. Treat it as authoritative. Summarize the structure, rules, definitions, exceptions, and dependencies. Do not add assumptions. I will confirm your understanding.”

Why this matters

  • Eliminates guessing
  • Aligns AI with your institutional reality
  • Prevents hallucinated rules

A.2 Excel Modeling Prompts

Scenario Model

“Design an Excel workbook with Inputs, Calculations, and Outputs tabs. Use named ranges. Include scenario toggles and validation checks that confirm totals tie out.”

Formula Debugging

“This Excel formula returns an error. Explain why, fix it, and rewrite it in a clearer form.”

Macro Creation

“Write a VBA macro that refreshes all data connections, recalculates, logs a timestamp, and alerts the user if validation checks fail. Comment every section.”

Documentation

“Explain this Excel workbook as if onboarding a new analyst. Describe what each worksheet does and how inputs flow to outputs.”


A.3 SQL Server Prompts

View Creation

“Create a SQL view that produces monthly totals by City and Department. Grain must be City-Month-Department. Exclude void transactions. Add comments and validation queries.”
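
One possible shape of the result, using hypothetical table and column names (dbo.Transactions, CityID, DepartmentID, IsVoid) purely for illustration:

  -- Hypothetical sketch of the requested view.
  -- Grain: one row per City, per Month, per Department.
  CREATE VIEW dbo.vw_MonthlyTotals_City_Dept AS
  SELECT
      t.CityID,
      t.DepartmentID,
      DATEFROMPARTS(YEAR(t.TransactionDate), MONTH(t.TransactionDate), 1) AS MonthStart,
      SUM(t.Amount) AS TotalAmount,
      COUNT(*)      AS TransactionCount
  FROM dbo.Transactions AS t
  WHERE t.IsVoid = 0                        -- exclude void transactions
  GROUP BY
      t.CityID,
      t.DepartmentID,
      DATEFROMPARTS(YEAR(t.TransactionDate), MONTH(t.TransactionDate), 1);
  GO

  -- Validation 1: the view should tie to the source total (voids excluded).
  SELECT
      (SELECT SUM(TotalAmount) FROM dbo.vw_MonthlyTotals_City_Dept)   AS ViewTotal,
      (SELECT SUM(Amount) FROM dbo.Transactions WHERE IsVoid = 0)     AS SourceTotal;

  -- Validation 2: the stated grain should be unique (expect zero rows back).
  SELECT CityID, DepartmentID, MonthStart, COUNT(*) AS RowsAtGrain
  FROM dbo.vw_MonthlyTotals_City_Dept
  GROUP BY CityID, DepartmentID, MonthStart
  HAVING COUNT(*) > 1;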

Performance Refactor

“Refactor this SQL query for performance without changing results. Explain what you changed and why.”

Error Interpretation

“Here is a SQL Server error message. Explain it in plain English and fix the query.”

Documentation

“Document this SQL schema so a new analyst understands table purpose, keys, and relationships.”


A.4 Power BI / DAX Prompts

(DAX = Data Analysis Expressions, the calculation language used by Power BI — a language AI already understands deeply.)

Measure Creation

“Create DAX measures for Total Cost, Cost per Capita, Year-over-Year Change, and Rolling 12-Month Average. Explain filter context for each.”

Debugging

“This DAX measure returns incorrect results when filtered. Explain why and correct it.”

Model Review

“Review this Power BI data model and identify risks: ambiguous relationships, missing dimensions, or inconsistent grain.”


A.5 Validation & Audit Prompts

Validation Suite

“Create validation queries that confirm totals tie to source systems and flag variances greater than 0.1%.”
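
A minimal sketch of what such a query might look like, assuming two hypothetical summary tables (dbo.ReportTotals and dbo.SourceSystemTotals) keyed by department and month:

  -- Hypothetical sketch: flag any department-month whose report total
  -- varies from the source system by more than 0.1%.
  SELECT
      r.DepartmentID,
      r.MonthStart,
      r.ReportTotal,
      s.SourceTotal,
      r.ReportTotal - s.SourceTotal AS Variance
  FROM dbo.ReportTotals AS r
  JOIN dbo.SourceSystemTotals AS s
      ON s.DepartmentID = r.DepartmentID
     AND s.MonthStart   = r.MonthStart
  WHERE ABS(r.ReportTotal - s.SourceTotal) > 0.001 * ABS(s.SourceTotal);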

Audit Explanation

“Explain how this model produces its final numbers in language suitable for auditors.”


A.6 Training & Handoff Prompts

Training Guide

“Create a training guide for an internal analyst explaining how to refresh, validate, and extend this model safely.”

Institutional Memory

“Write a ‘how this system thinks’ document explaining design philosophy, assumptions, and known limitations.”


Key Principle

Good prompts don’t ask for brilliance.
They provide clarity.


Appendix B

How to Validate AI-Generated Analysis Without Becoming Paranoid

AI does not eliminate validation.
It raises the bar for it.

The danger is not trusting AI too much.
The danger is trusting anything without discipline.


B.1 The Rule of Independent Confirmation

Every important number must:

  • tie to a known source, or
  • be independently recomputable

If it cannot be independently confirmed, it is not final.


B.2 Validation Layers (Use All of Them)

Layer 1 — Structural Validation

  • Correct grain (monthly vs annual)
  • No duplicate keys
  • Expected row counts

Layer 2 — Arithmetic Validation

  • Subtotals equal totals
  • Allocations sum to 100%
  • No unexplained residuals

Layer 3 — Reconciliation

  • Ties to GL, ACFR, payroll, ridership, etc.
  • Same totals across tools (Excel, SQL, Power BI)

Layer 4 — Reasonableness Tests

  • Per-capita values plausible?
  • Sudden jumps explainable?
  • Trends consistent with known events?

AI can help generate all four layers, but humans must decide what “reasonable” means.
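
As one concrete instance of Layer 2, a check that allocation shares actually sum to 100% within each cost pool might look like the sketch below; dbo.AllocationShares and its columns are hypothetical names used only to show the pattern.

  -- Hypothetical sketch of an arithmetic validation:
  -- flag any cost pool whose allocation shares do not sum to 100%.
  SELECT
      a.CostPoolID,
      SUM(a.SharePercent) AS TotalSharePercent
  FROM dbo.AllocationShares AS a
  GROUP BY a.CostPoolID
  HAVING ABS(SUM(a.SharePercent) - 100.0) > 0.01;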


B.3 The “Explain It Back” Test

One of the strongest validation techniques:

“Explain how this number was produced step by step.”

If the explanation:

  • is coherent
  • references known rules
  • matches expectations

You’re on solid ground.

If not, stop.


B.4 Change Detection

Always compare:

  • this month vs last month
  • current version vs prior version

Ask AI:

“Identify and explain every material change between these two outputs.”

This catches silent errors early.
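
The comparison itself is mechanical. A sketch of a month-over-month check, assuming two hypothetical snapshot tables with the same layout (dbo.Totals_Current and dbo.Totals_Prior), might look like this; AI's contribution is the explanation of each change, not the arithmetic.

  -- Hypothetical sketch: list every key whose total changed between snapshots,
  -- including keys that appear in only one of them.
  SELECT
      COALESCE(c.DepartmentID, p.DepartmentID) AS DepartmentID,
      COALESCE(c.AccountID, p.AccountID)       AS AccountID,
      p.TotalAmount                            AS PriorTotal,
      c.TotalAmount                            AS CurrentTotal,
      COALESCE(c.TotalAmount, 0) - COALESCE(p.TotalAmount, 0) AS Change
  FROM dbo.Totals_Current AS c
  FULL OUTER JOIN dbo.Totals_Prior AS p
      ON p.DepartmentID = c.DepartmentID
     AND p.AccountID    = c.AccountID
  WHERE COALESCE(c.TotalAmount, 0) <> COALESCE(p.TotalAmount, 0)
  ORDER BY ABS(COALESCE(c.TotalAmount, 0) - COALESCE(p.TotalAmount, 0)) DESC;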


B.5 What Validation Is Not

Validation is not:

  • blind trust
  • endless skepticism
  • redoing everything manually

Validation is structured confidence-building.


B.6 Why AI Helps Validation (Instead of Weakening It)

AI:

  • generates test queries quickly
  • explains failures clearly
  • documents expected behavior
  • flags anomalies humans may miss

AI doesn’t reduce rigor.
It makes rigor affordable.


Appendix C

What Managers Should Ask For — and What They Should Stop Asking For

This appendix is for leaders.

Good management questions produce good systems.
Bad questions produce busywork.


C.1 What Managers Should Ask For

“Show me the assumptions.”

If assumptions aren’t visible, the output isn’t trustworthy.


“How does this tie to official numbers?”

Every serious analysis must reconcile to something authoritative.


“What would change this conclusion?”

Good models reveal sensitivities, not just answers.


“How will this update next month?”

If refresh is manual or unclear, the model is fragile.


“Who can maintain this if you’re gone?”

This forces documentation and institutional ownership.


C.2 What Managers Should Stop Asking For

❌ “Just give me the number.”

Numbers without context are liabilities.


❌ “Can you do this quickly?”

Speed without clarity creates rework and mistrust.


❌ “Why can’t this be done in Excel?”

Excel is powerful—but it is not a system of record.


❌ “Can’t AI just do this automatically?”

AI accelerates work within rules.
It does not invent governance.


C.3 The Best Managerial Question of All

“How confident should I be in this, and why?”

That question invites:

  • validation
  • explanation
  • humility
  • trust

It turns analysis into leadership support instead of technical theater.


Appendix D

Job Description: The Modern Analyst (0–3 Years Experience)

This job description reflects what an effective, durable analyst looks like today — not a unicorn, not a senior architect, and not a narrow technician.

This role assumes the analyst will work in an environment that uses Excel, SQL Server, Power BI, and AI tools as part of normal operations.


Position Title

Data / Financial / Business Analyst
(Title may vary by organization)


Experience Level

  • Entry-level to 3 years of professional experience
  • Recent graduates encouraged to apply

Role Purpose

The Modern Analyst supports decision-making by:

  • transforming raw data into reliable information,
  • building repeatable analytical workflows,
  • documenting logic clearly,
  • and communicating results in ways leaders can trust.

This role is not about memorizing syntax or becoming a single-tool expert.
It is about directing analytical tools — including AI — with clarity, discipline, and judgment.


Core Responsibilities

1. Analytical Thinking & Problem Framing

  • Translate business questions into analytical tasks
  • Clarify assumptions, definitions, and scope before analysis begins
  • Identify what data is needed and where it comes from
  • Ask follow-up questions when requirements are ambiguous

2. Excel Modeling & Scenario Analysis

  • Build and maintain Excel models using:
    • structured layouts (inputs → calculations → outputs)
    • clear formulas and named ranges
    • validation checks and reconciliation totals
  • Use Excel for:
    • exploratory analysis
    • scenario testing
    • sensitivity analysis
  • Leverage AI tools to:
    • generate formulas
    • debug errors
    • document models

3. SQL Server Data Work

  • Query and analyze data stored in SQL Server
  • Create and maintain:
    • views
    • aggregation queries
    • validation checks
  • Understand concepts such as:
    • joins
    • grouping
    • grain (row-level meaning)
  • Use AI assistance to:
    • write SQL code
    • optimize queries
    • interpret error messages
    • document logic clearly

(Deep database administration is not required.)


4. Power BI Reporting & Analysis

  • Build and maintain Power BI reports and dashboards
  • Use existing semantic models and measures
  • Create new measures using DAX (Data Analysis Expressions) with AI guidance
  • Ensure reports:
    • align with defined metrics
    • update reliably
    • are understandable to non-technical users

5. Documentation & Knowledge Transfer

  • Document:
    • Excel models
    • SQL queries
    • Power BI reports
  • Write explanations that allow another analyst to:
    • understand the logic
    • reproduce results
    • maintain the system
  • Use AI to accelerate documentation while ensuring accuracy

6. Validation & Quality Control

  • Reconcile outputs to authoritative sources
  • Identify anomalies and unexplained changes
  • Use validation checks rather than assumptions
  • Explain confidence levels and limitations clearly

7. Collaboration & Communication

  • Work with:
    • finance
    • operations
    • IT
    • management
  • Present findings clearly in plain language
  • Respond constructively to questions and challenges
  • Accept feedback and revise analysis as needed

Required Skills & Competencies

Analytical & Professional Skills

  • Curiosity and skepticism
  • Attention to detail
  • Comfort asking clarifying questions
  • Willingness to document work
  • Ability to explain complex ideas simply

Technical Skills (Baseline)

  • Excel (intermediate level or higher)
  • Basic SQL (SELECT, JOIN, GROUP BY)
  • Familiarity with Power BI or similar BI tools
  • Comfort using AI tools for coding, explanation, and documentation

Candidates are not expected to know everything on day one.


Preferred Qualifications

  • Degree in:
    • Finance
    • Accounting
    • Economics
    • Data Analytics
    • Information Systems
    • Engineering
    • Public Administration
  • Internship or project experience involving data analysis
  • Exposure to:
    • budgeting
    • forecasting
    • cost allocation
    • operational metrics

What Success Looks Like (First 12–18 Months)

A successful analyst in this role will be able to:

  • independently build and explain Excel models
  • write and validate SQL queries with AI assistance
  • maintain Power BI reports without breaking definitions
  • document their work clearly
  • flag issues early rather than hiding uncertainty
  • earn trust by being transparent and disciplined

What This Role Is Not

This role is not:

  • a pure programmer role
  • a dashboard-only role
  • a “press the button” reporting job
  • a role that values speed over accuracy

Why This Role Matters

Organizations increasingly fail not because they lack data, but because:

  • logic is undocumented
  • assumptions are hidden
  • systems are fragile
  • knowledge walks out the door

This role exists to prevent that.


Closing Note to Candidates

You do not need to be an expert in every tool.

You do need to:

  • think clearly,
  • communicate honestly,
  • learn continuously,
  • and use AI responsibly.

If you can do that, the tools will follow.


Appendix E

Interview Questions a Strong Analyst Should Ask

(And Why the Answers Matter)

This appendix is written for candidates — especially early-career analysts — who want to succeed, grow, and contribute meaningfully.

These are not technical questions.
They are questions about whether the environment supports good analytical work.

A thoughtful organization will welcome these questions.
An uncomfortable response is itself an answer.


1. Will I Have Timely Access to the Data I’m Expected to Analyze?

Why this matters

Analysts fail more often from lack of access than lack of ability.

If key datasets (such as utility billing, payroll, permitting, or ridership data) require long approval chains, partial access, or repeated manual requests, analysis stalls. Long delays force analysts to restart work cold, which is inefficient and demoralizing.

A healthy environment has:

  • clear data access rules,
  • predictable turnaround times,
  • and documented data sources.

2. Will I Be Able to Work in Focused Blocks of Time?

Why this matters

Analytical work requires concentration and continuity.

If an analyst’s day is fragmented by:

  • constant meetings,
  • urgent ad-hoc requests,
  • unrelated administrative tasks,

then even talented analysts struggle to make progress. Repeated interruptions over days or weeks force constant re-learning and increase error risk.

Strong teams protect at least some uninterrupted time for deep work.


3. How Often Are Priorities Changed Once Work Has Started?

Why this matters

Changing priorities is normal. Constant resets are not.

Frequent shifts without closure:

  • waste effort,
  • erode confidence,
  • and prevent analysts from seeing work through to completion.

A good environment allows:

  • exploratory work,
  • followed by stabilization,
  • followed by delivery.

Analysts grow fastest when they can complete full analytical cycles.


4. Will I Be Asked to Do Significant Work Outside the Role You’re Hiring Me For?

Why this matters

Early-career analysts often fail because they are overloaded with tasks unrelated to analysis:

  • ad-hoc administrative work,
  • manual data entry,
  • report formatting unrelated to insights,
  • acting as an informal IT support desk.

This dilutes skill development and leads to frustration.

A strong role respects analytical focus while allowing reasonable cross-functional exposure.


5. Where Will This Role Sit Organizationally?

Why this matters

Analysts thrive when they are close to:

  • decision-makers,
  • subject-matter experts,
  • and the business context.

Being housed in IT can be appropriate in some organizations, but analysts often succeed best when:

  • they are embedded in finance, operations, or planning,
  • with strong, cooperative support from IT, not ownership by IT.

Clear role placement reduces confusion about expectations and priorities.


6. What Kind of Support Will I Have from IT?

Why this matters

Analysts do not need IT to do their work for them — but they do need:

  • help with access,
  • guidance on standards,
  • and assistance when systems issues arise.

A healthy environment has:

  • defined IT support pathways,
  • mutual respect between analysts and IT,
  • and shared goals around data quality and security.

Adversarial or unclear relationships slow everyone down.


7. Will I Be Encouraged to Document My Work — and Given Time to Do So?

Why this matters

Documentation is often praised but rarely protected.

If analysts are rewarded only for speed and output, documentation becomes the first casualty. This creates fragile systems and makes handoffs painful.

Strong organizations:

  • value documentation,
  • allow time for it,
  • and recognize it as part of the job, not overhead.

8. How Will Success Be Measured in the First Year?

Why this matters

Vague success criteria create anxiety and misalignment.

A healthy answer includes:

  • skill development,
  • reliability,
  • learning the organization’s data,
  • and increasing independence over time.

Early-career analysts need space to learn without fear of being labeled “slow.”


9. What Happens When Data or Assumptions Are Unclear?

Why this matters

No dataset is perfect.

Analysts succeed when:

  • questions are welcomed,
  • assumptions are discussed openly,
  • and uncertainty is handled professionally.

An environment that discourages questions or punishes transparency leads to quiet errors and loss of trust.


10. Will I Be Allowed — and Encouraged — to Use Modern Tools Responsibly?

Why this matters

Analysts today learn and work using tools like:

  • Excel,
  • SQL,
  • Power BI,
  • and AI-assisted analysis.

If these tools are discouraged, restricted without explanation, or treated with suspicion, analysts are forced into inefficient workflows. In many cases, the latest versions, with their added features, deliver real productivity gains. Is the organization more than one or two years behind on updates right now? What are the views of key players about AI?

Strong organizations focus on:

  • governance,
  • validation,
  • and responsible use — not blanket prohibition.

11. How Are Analytical Mistakes Handled?

Why this matters

Mistakes happen — especially while learning.

The question is whether the culture responds with:

  • learning and correction, or
  • blame and fear.

Analysts grow fastest in environments where:

  • mistakes are surfaced early,
  • corrected openly,
  • and used to improve systems.

12. Who Will I Learn From?

Why this matters

Early-career analysts need:

  • examples,
  • feedback,
  • and mentorship.

Even informal guidance matters.

A thoughtful answer shows the organization understands that analysts are developed, not simply hired.


Closing Note to Candidates

These questions are not confrontational.
They are professional.

Organizations that welcome them are more likely to:

  • retain talent,
  • produce reliable analysis,
  • and build durable systems.

If an organization cannot answer these questions clearly, it does not mean it is a bad place — but it may not yet be a good place for an analyst to thrive.


Appendix F

A Necessary Truce: IT Control, Analyst Access, and the Role of Sandboxes

One of the most common — and understandable — tensions in modern organizations sits at the boundary between IT and analytical staff.

It usually sounds like this:

“We can’t let anyone outside IT touch live databases.”

On this point, IT is absolutely right.

Production systems exist to:

  • run payroll,
  • bill customers,
  • issue checks,
  • post transactions,
  • and protect sensitive information.

They must be:

  • stable,
  • secure,
  • auditable,
  • and minimally disturbed.

No serious analyst disputes this.

But here is the equally important follow-up question — one that often goes unspoken:

If analysts cannot access live systems, do they have access to a safe, current analytical environment instead?


Production Is Not the Same Thing as Analysis

The core misunderstanding is not about permission.
It is about purpose.

  • Production systems are built to execute transactions correctly.
  • Analytical systems are built to understand what happened.

These are different jobs, and they should live in different places.

IT departments already understand this distinction in principle. The question is whether it has been implemented in practice.


The Case for Sandboxes and Analytical Mirrors

A well-run organization does not give analysts access to live transactional tables.

Instead, it provides:

  • read-only mirrors
  • overnight refreshes at a minimum
  • restricted, de-identified datasets
  • clearly defined analytical schemas

This is not radical.
It is standard practice in mature organizations.

What a Sandbox Actually Is

A sandbox is:

  • a copy of production data,
  • refreshed on a schedule (often nightly),
  • isolated from operational systems,
  • and safe to explore without risk.

Analysts can:

  • query freely,
  • build models,
  • validate logic,
  • and document findings

…without the possibility of disrupting operations.


A Practical Example: Payroll and Personnel Data

Payroll is often cited as the most sensitive system — and rightly so.

But here is the practical reality:

Most analytical work does not require:

  • Social Security numbers
  • bank account details
  • wage garnishments
  • benefit elections
  • direct deposit instructions

What analysts do need are things like:

  • position counts
  • departments
  • job classifications
  • pay grades
  • hours worked
  • overtime
  • trends over time

A Payroll / Personnel sandbox can be created that:

  • mirrors the real payroll tables,
  • strips or masks protected fields,
  • replaces SSNs with surrogate keys,
  • removes fields irrelevant to analysis,
  • refreshes nightly from production

This allows analysts to answer questions such as:

  • How is staffing changing?
  • Where is overtime increasing?
  • What are vacancy trends?
  • How do personnel costs vary by department or function?

All without exposing sensitive personal data.

This is not a compromise of security.
It is an application of data minimization, a core security principle.
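
To ground this, here is a minimal sketch of what the analyst-facing layer might look like. The schema and object names (mirror, sandbox, EmployeeSurrogateKey, and so on) are hypothetical placeholders; the point is the pattern, not the specifics. A nightly job refreshes the mirror from production, and analysts query only de-identified views built on top of it.

  -- Hypothetical sketch: the analyst-facing view over a nightly payroll mirror.
  -- Protected fields (SSNs, bank accounts, garnishments, benefit elections)
  -- simply never appear in the schema analysts can query.
  CREATE VIEW sandbox.vw_PayrollAnalytic AS
  SELECT
      e.EmployeeSurrogateKey,       -- generated key; the SSN never leaves production
      e.DepartmentID,
      e.JobClassification,
      e.PayGrade,
      e.FTE,
      p.PayPeriodEndDate,
      p.RegularHours,
      p.OvertimeHours,
      p.GrossPay
  FROM mirror.Employee AS e
  JOIN mirror.PayrollDetail AS p
      ON p.EmployeeSurrogateKey = e.EmployeeSurrogateKey;

Staffing counts, overtime trends, and personnel costs by department can all be answered from a view like this without a single protected field crossing the line.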


Why This Matters More Than IT Realizes

When analysts lack access to safe, current analytical data, several predictable failures occur:

  • Analysts rely on stale exports
  • Logic is rebuilt repeatedly from scratch
  • Results drift from official numbers
  • Trust erodes between departments
  • Decision-makers get inconsistent answers

Ironically, over-restriction often increases risk, because:

  • people copy data locally,
  • spreadsheets proliferate,
  • and controls disappear entirely.

A well-designed sandbox reduces risk by centralizing access under governance.


What IT Is Right to Insist On

IT is correct to insist on:

  • no write access
  • no direct production access
  • strong role-based security
  • auditing and logging
  • clear ownership of schemas
  • documented refresh processes

None of that is negotiable.

But those safeguards are fully compatible with analyst access — if access is provided in the right environment.


What Analysts Are Reasonably Asking For

Analysts are not asking to:

  • run UPDATE statements on live tables
  • bypass security controls
  • access protected personal data
  • manage infrastructure

They are asking for:

  • timely access to analytical copies of data
  • predictable refresh schedules
  • stable schemas
  • and the ability to do their job without constant resets

That is a governance problem, not a personnel problem.


The Ideal Operating Model

In a healthy organization:

  • IT owns production systems
  • IT builds and governs analytical mirrors
  • Analysts work in sandboxes
  • Finance and operations define meaning
  • Validation ties analysis back to production totals
  • Everyone wins

This model:

  • protects systems,
  • protects data,
  • supports analysis,
  • and builds trust.

Why This Belongs in This Series

Earlier appendices described:

  • the skills of the modern analyst,
  • the questions analysts should ask,
  • and the environments that cause analysts to fail or succeed.

This appendix addresses a core environmental reality:

Analysts cannot succeed without access — and access does not require risk.

The solution is not fewer analysts or tighter gates.
The solution is better separation between production and analysis.


A Final Word to IT, Finance, and Leadership

This is not an argument against IT control.

It is an argument for IT leadership.

The most effective IT departments are not those that say “no” most often —
they are the ones that say:

“Here is the safe way to do this.”

Sandboxes, data warehouses, and analytical mirrors are not luxuries.
They are the infrastructure that allows modern organizations to think clearly without breaking what already works.

Closing Note on the Appendices

These appendices complete the framework:

  • The main essay explains the stack
  • The follow-up explains how to direct AI
  • These appendices make it operational

Together, they describe not just how to use AI—but how to use it responsibly, professionally, and durably.

What Every Student Should Learn From Economics — The Missing Foundation for Adult Life

A collaboration between Lewis McLain & AI (3 of 4 in a Series)

If I struggled with literature when I was young, and if I misunderstood the purpose of history, then economics was the third great gap in my early education. I went through high school without any real understanding of how money works, how governments raise and spend it, how markets respond to incentives, or how personal financial decisions compound over time. I did not grasp the forces shaping wages, prices, interest rates, trade, taxation, inflation, or debt. I did, eventually, get a good dose of all this in college.

Looking back, I can see clearly:
Economics is the core life subject that students most need — and most rarely receive in a meaningful way.

What educators should want every student to know from required economics courses is nothing less than the mental framework necessary to navigate adulthood, evaluate public policy, make financial decisions, and understand why nations prosper or struggle. Economics is not simply business; it is the study of how people, families, governments, and societies make choices. A few years ago, I attended a multi-day course for high school teachers hosted by the Dallas Federal Reserve. It was an outstanding experience. Resources like that are readily available to teachers today, thank goodness!

This essay explores the essential economic understanding every student deserves — and why it matters now more than ever.


1. Scarcity, Choice, and Opportunity Cost: The Law That Governs Everything

The first truth of economics is painfully simple:
We cannot have everything we want.

Every choice is a tradeoff. Students should walk away understanding that:

  • Choosing to spend money here means not spending it there.
  • Choosing one policy means giving up another.
  • Choosing time for one activity means sacrificing time for something else.

Economics calls this opportunity cost — the value of the next best alternative you give up.

Once a student grasps this, the world becomes clearer:

  • Why governments cannot fund unlimited programs.
  • Why cities must prioritize.
  • Why individuals must budget.
  • Why nations cannot tax, borrow, or spend without consequences.

This one idea alone can save people from poor decisions, unrealistic expectations, and political manipulation.


2. How Markets Work — And What Happens When They Don’t

Every student should understand the basics of markets:

  • Supply and demand
  • Prices as signals
  • Competition as a force for innovation
  • Incentives as drivers of behavior

These are not theories — they are observable realities.

Examples:

  • When the price of lumber rises, construction slows.
  • When wages rise in one industry, workers shift into it.
  • When a product becomes scarce, people value it more.

Students should also learn about market failures, when markets do not work well:

  • Externalities (pollution)
  • Monopolies (lack of competition)
  • Public goods (national defense)
  • Information asymmetry (the mechanic knows more than the customer)

A well-educated adult should understand why some things are best left to markets, and others require collective action.


3. Money, Inflation, and the Hidden Forces That Shape Daily Life

Economics teaches students what money actually is — a medium of exchange, a store of value, a unit of account. It teaches why inflation happens, how interest rates work, and why credit matters.

This is the knowledge people most need to avoid lifelong mistakes:

  • High-interest debt
  • Payday loans
  • Adjustable-rate surprises
  • Over-borrowing
  • Misunderstanding mortgages
  • Under-saving for retirement
  • Falling for financial scams

Inflation, especially, is a quiet teacher.
Students should know:

  • Why prices rise
  • How purchasing power erodes
  • Why governments sometimes overspend
  • How central banks attempt to stabilize the economy

Without this understanding, adults become vulnerable to false promises, political slogans, and emotional decisions disguised as economic policy.


4. Government, Taxes, Debt, and the Economics of Public Choices

Students should understand how governments fund themselves:

  • income taxes
  • sales taxes
  • property taxes
  • corporate taxes
  • tariffs
  • fees and permits

They should know the difference between:

  • deficits and debt
  • mandatory vs. discretionary spending
  • expansionary vs. contractionary policy

And they should understand the consequences of borrowing:

  • interest costs
  • crowding out
  • inflationary risks
  • intergenerational burdens

A citizen who understands these concepts is harder to fool with slogans like:

  • “Free college for everyone!”
  • “We can tax the rich for everything!”
  • “Deficits don’t matter!”
  • “We can cut taxes without cutting services!”

Economics teaches that every promise has a cost — and someone must pay it.


5. Personal Finance: The Economics of Everyday Life

If there is one area where economics should be utterly practical, it is here.
Every student needs to understand:

  • budgeting
  • saving
  • compound interest
  • emergency funds
  • insurance
  • investing basics
  • retirement accounts
  • debt management
  • risk vs. reward

Without this, students walk into adulthood with no map — and they learn lessons the hard way.

One simple example:
$200 saved per month from age 22 to 65 at 7% grows to roughly $500,000.
The same $200 saved starting at age 35 grows to only ~$200,000.

Time matters.
Compounding matters.
Knowing this early changes lives.
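
For readers who want the arithmetic behind that example, the figures come from the standard future-value-of-an-annuity formula, shown here as a sketch; the exact totals depend on compounding frequency and on whether deposits are made at the start or end of each period, which is why the numbers above are stated only roughly:

  FV = P \cdot \frac{(1 + r)^{n} - 1}{r}

where P is the deposit per period, r is the interest rate per period (0.07/12 for monthly saving at 7%), and n is the number of periods (12 times the number of years saved).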


6. Global Economics: Trade, Jobs, and National Strength

Students should understand why countries trade:

  • comparative advantage
  • specialization
  • global supply chains
  • exchange rates

They should understand what drives:

  • tariffs
  • sanctions
  • trade deficits
  • manufacturing shifts
  • labor markets

This is the foundation for understanding why:

  • some industries move overseas
  • some cities decline while others rise
  • automation replaces certain jobs
  • immigration affects labor supply
  • global shocks (like pandemics or wars) reshape economies

A student with global economic literacy is less fearful and more informed — and can better adapt to economic change.


7. Economics and Human Behavior

Economics is not just numbers — it is a window into human nature.

Students should learn:

  • why incentives matter
  • why people respond predictably to policy changes
  • why scarcity shapes decisions
  • why risk and reward are universal
  • why unintended consequences are common

For example:

  • Overly generous unemployment benefits can reduce the incentive to return to work.
  • Rent control can reduce housing supply, raising prices long-term.
  • Strict zoning can artificially inflate housing costs.
  • Tax breaks can shift business decisions but may not produce promised jobs.

Economics helps students see beyond intentions to outcomes.


8. Why Economics Matters Even More in the Age of AI

AI has changed everything — except human nature and economic reality.

AI can process data, but it cannot interpret incentives.

Only a human mind can understand why people behave as they do.

AI can forecast trends, but it cannot grasp consequences.

Consequences require judgment shaped by real-world understanding.

AI can make decisions quickly, but it cannot weigh tradeoffs ethically.

Economics teaches students how those tradeoffs work.

AI makes bad decisions faster when guided by people who don’t understand economics.

A poorly trained human with a powerful tool is dangerous.
A well-trained human with the same tool is wise.

Economics is the steadying force that helps society use AI responsibly.


Conclusion: The Blueprint for a Competent Adult

What educators want students to gain from economics is not technical jargon or narrow theories. It is an understanding of how the world works.

Economics teaches:

  • how choices shape outcomes
  • how incentives drive behavior
  • how money, markets, and governments interact
  • why prosperity is fragile and must be understood
  • how individuals, families, and nations manage limited resources
  • how to avoid financial mistakes and public illusions

If literature strengthens the mind and imagination,
and history strengthens judgment and citizenship,
economics strengthens decision-making — the backbone of adult life.

Together, they form the education every young person deserves before entering the real world. And the most important thing I hope you take away from this essay and my experience: high school in particular, and college in general, are where you launch into a lifetime of learning (and re-learning). If you see anything in this series that you judge you missed, go back and learn it! LFM

The Mind of the Mapmaker

A collaboration between Lewis McLain & AI

Skills, Motivation, and the Capabilities Behind Accurate Mapping



Introduction: The Human Attempt to Shrink the World Into Understanding

A map seems simple at first glance: a flat surface covered with lines, shapes, labels, and colors. Yet the act of creating an accurate map is one of the most difficult intellectual tasks humans have ever attempted. Mapping demands a rare combination of observation, mathematics, engineering, imagination, artistry, philosophy, and courage. It requires a person to look at a world too large to see all at once and to represent it faithfully on something small enough to hold in the hand. Every map, whether carved on a clay tablet or drawn by satellite algorithms, is a claim about what is real and what matters.

This paper explores the mapmaker’s mind across four eras—ancient, exploratory, philosophical, and modern technological—and then strengthens that understanding through case studies and technical appendices. Throughout the narrative, one idea remains constant: accuracy is not merely a technical achievement; it is a human triumph grounded in the mapmaker’s inner capabilities.


I. Ancient Mapmakers: Building Accuracy from Memory, Observation, and Survival

For thousands of years, before the invention of compasses, sextants, or even numerals as we know them, mapmakers relied on the most fundamental tools available to any human being: their memory, their senses, and their endurance.

A Babylonian cartographer might spend long days walking field boundaries and tying lengths of rope to stakes to re-establish property lines after floods. An Egyptian “rope stretcher” could look at the shadow of a pillar, note the angle, and derive a surprisingly accurate sense of latitude and season. Polynesian navigators sensed the shape of islands from the swell of the ocean, the direction of prevailing winds, the pattern of clouds, or the flight paths of birds—even when land was hundreds of miles away. All of this happened without written language in many places, and without anything like formal mathematics.

The motivations were simple but powerful. Survival required knowing where water, game, shelter, and danger lay. Governance required knowing how much farmland belonged to whom, where the temples held jurisdiction, and how to tax agricultural output. Trade required predictable knowledge of paths, distances, and safe passages. Human curiosity played its own role as well; people have always wanted to know the shape of their world.

Accuracy in ancient mapping was limited by natural constraints. Long distances could not be measured with confidence. Longitude remained elusive for nearly all of human history. Oral traditions, though rich, introduced distortions. Political agendas often shaped borders. And yet ancient maps show remarkable competence: logical river systems, consistent directions, recognizable landforms, and surprisingly stable proportionality. Accuracy was relative to the tools available, but the intent—the desire to record reality—was the same as today.



II. Explorers and Enlightenment Surveyors: Lewis & Clark and the Birth of Scientific Mapping

The early nineteenth century introduced a new kind of cartographer: the trained surveyor who combined field observation with scientific measurement. Lewis and Clark exemplify this transition.

Armed with sextants, compasses, chronometers, astronomical tables, and notebooks filled with surveying instructions, they attempted to impose geometric precision on a landscape no European-American had ever mapped. They measured solar angles to determine latitude, recorded compass bearings at virtually every bend of the Missouri River, estimated distances by timing their travel and judging their speed, and triangulated mountain peaks whenever weather permitted. Their notebooks reveal how meticulously they checked, recalculated, and corrected their own readings.

Their motivation blended national ambition, Enlightenment science, personal curiosity, and a desire for legacy. President Jefferson viewed the expedition as a grand experiment in empirical observation and hoped to gather geographic, botanical, zoological, and ethnographic knowledge all at once. Lewis and Clark themselves were deeply committed to documenting not only what they saw but how they measured it.

Despite their tools, they faced severe limitations. Cloud cover often prevented celestial readings. Magnetic variation made some compass bearings unreliable. River distances were difficult to estimate accurately when paddling against currents. Longitudes were usually approximations, sometimes guessed, because no portable timekeeping device of the period could maintain accuracy under field conditions. Yet the map produced from their expedition defined the American West for decades, confirmed mountain ranges, captured river systems, located tribal lands, and fundamentally reshaped the geographic understanding of a continent.

Their accomplishment demonstrates that accuracy is a function not only of tools but of discipline, repetition, cross-checking, and the mental fortitude to tolerate error until it can be corrected.


III. The Philosophical Mapmaker: Understanding That a Map Is a Model, Not the World

One of the most difficult but essential truths in cartography is that a map can never be fully accurate in every dimension. A map is a model, not the thing itself. Understanding this transforms how we judge accuracy.

No map can include everything. The mapmaker must decide what to include and what to omit, what to emphasize and what to generalize. This selective process shapes meaning as much as measurement does. A map that focuses on roads sacrifices terrain; a map that shows landforms hides political boundaries; a nautical chart prioritizes depth, hazards, and tides while ignoring nearly everything inland.

Even more fundamentally, the Earth is round and a map is flat. Flattening a sphere introduces distortions in shape, area, distance, or direction. No projection solves all problems at once. The Mercator projection preserves direction for navigation but distorts the sizes of continents dramatically. Equal-area projections preserve proportional land area but contort shapes. Conic projections work beautifully for mid-latitude regions like the United States but fail near the equator and poles.

Scale introduces another layer of philosophical choice. A map of a neighborhood can show driveways, footpaths, and fire hydrants; a map of a nation must erase tens of thousands of such details. At global scale, even major rivers become thin suggestions rather than features.

Finally, maps inevitably carry bias. National borders are often political statements as much as geographic descriptions. Cultural assumptions guide what is considered important. The purpose of a map—a subway map, a floodplain map, a highway atlas—governs its priorities. Every map quietly expresses a worldview.

Thus, “accurate” does not mean “perfectly true.” It means “fit for the purpose.” A map is correct to the extent that it serves the need it was created for.



IV. The Modern Cartographer: Satellites, GIS, and the Era of Precision

The modern mapmaker operates in a world overflowing with spatial information. GPS satellites circle the earth, constantly broadcasting timing signals that allow any handheld receiver to determine position within a few meters—and survey-grade receivers to reach centimeter-level accuracy. High-resolution satellite imagery captures coastlines, forests, highways, and rooftops with astonishing clarity. LiDAR sensors measure elevation by firing millions of laser pulses per second, creating three-dimensional models of terrain. GIS (Geographic Information Systems) software organizes, analyzes, and visualizes enormous spatial datasets.

The work of the modern cartographer is less about drawing lines and more about managing data. A GIS analyst must understand spatial statistics, database schemas, metadata verification, remote sensing interpretation, coordinate transformations, and the difference between nominal, ordinal, interval, and ratio data. The skill set is analytical, computational, and scientific.

The motivations have expanded as well. Modern mapping supports transportation engineering, zoning, emergency response, flood mitigation, environmental policy, epidemiology, commercial logistics, climate science, and international security. Governments, companies, and researchers all rely on constantly updated maps to make daily decisions.

Yet the abundance of data introduces new complications. Errors no longer stem primarily from lack of information but from inconsistency among datasets, outdated imagery, automated misclassification, incorrect coordinate transformation, or the false sense of precision that digital numbers can give. Even in a world of satellites, the mapmaker must remain vigilant and skeptical. Accuracy must still be earned, not assumed.



V. Case Studies: How Real Maps Achieve Real Accuracy

The theory of mapmaking becomes clearer when examined through specific examples. Four case studies reveal how different contexts produce different solutions to the same universal problem.

Case Study 1: The USGS Topographic Map

The United States Geological Survey began producing standardized topographic maps in the late nineteenth century, combining triangulation, plane-table surveying, and field verification. Later editions incorporated aerial photography and eventually satellite data. These maps formed the spatial backbone of national development. Engineers relied on them to place highways, dams, airports, pipelines, and railroads. Hikers and outdoor enthusiasts still use them today.

Their accuracy was remarkable for their time: often within a few meters horizontally and within a meter vertically. They became the nation’s common spatial language, demonstrating how consistent methodology and repeated verification create reliability across vast geographic space.

Case Study 2: Nautical Charts and the Challenge of the Ocean

No mapping discipline demands more caution than nautical charting. Mariners depend on accurate depths, hazard markings, and tidal information. Early sailors used weighted ropes and visual triangulation to estimate depth. Today’s hydrographers use multibeam sonar, satellite altimetry, LiDAR bathymetry, and tide-corrected measurements to produce charts that can reveal underwater features with astonishing detail.

Yet the ocean floor is dynamic. Storms move sandbars. Currents reshape channels. Dredging alters harbor depths. For this reason, nautical charts are never fully “finished.” They require constant updating. The challenge is not simply measuring depth once, but sustaining accuracy in a world that changes.

Case Study 3: The London Underground Map and the Meaning of “Accuracy”

The London Tube Map, introduced by Harry Beck in 1933, revolutionized the concept of cartographic truth. Beck realized that subway riders did not need geographic precision. They needed simplicity, clarity, and relational accuracy—knowing how stations connected, not how far apart they were in miles.

By replacing geographic realism with abstract geometry, he created a map that was geographically inaccurate but functionally brilliant. Most transit maps worldwide now follow the same principle. This case study illustrates that the “right” map is the map that serves the user’s need, not the map that most faithfully represents ground truth.

Case Study 4: Google Maps and the Algorithmic Cartographer

Google Maps represents an entirely new form of mapping. Unlike paper maps, it is not a static depiction of geography. It is a constantly shifting model created from satellite images, aerial photos, street-level observations, user reports, and complex routing algorithms. It recalculates itself continuously, adjusting for traffic, construction, business changes, and political variations in border representation.

Its power is extraordinary, but its limitations remind us that automation cannot eliminate human judgment. The platform reflects commercial incentives, political boundaries, and the imperfections of crowdsourced information. Accuracy is high but uneven, and like the ocean charts, the system must be updated constantly to remain trustworthy.



VI. A Unified Theory of Mapmaking

Across all eras and technologies, the mapmaker’s challenge remains the same. The world is too large and too complex to be perceived directly, so the mapmaker must choose which aspects of reality to capture. Those choices—shaped by purpose, tools, knowledge, and bias—determine whether the resulting map will be useful or misleading. Measurement introduces error; projection introduces distortion; interpretation introduces judgment. Accuracy is always relative to context, intention, and method.

The mapmaker succeeds not by eliminating error altogether, but by understanding its sources, managing its influence, and balancing the competing truths that every map must negotiate.


VII. Technical Appendices

Appendix A: Coordinate Systems and Projections

Modern mapping rests on systems that allow the entire Earth to be described mathematically. Latitude and longitude divide the globe into degrees, providing a universal reference that is easy to conceptualize but difficult to measure perfectly at large scales. The Universal Transverse Mercator (UTM) system divides the Earth into sixty narrow north-south zones, each six degrees of longitude wide, within which distortion remains small enough for engineering work. The North American Datum (NAD83) and the World Geodetic System (WGS84) provide precise mathematical models of the Earth’s shape, enabling GPS receivers to calculate location with remarkable accuracy.
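
As a concrete illustration (using the open-source pyproj library, which the appendix does not mention and is assumed here only for the example), the snippet below converts a WGS84 latitude/longitude pair into UTM coordinates:

```python
# Converting a WGS84 longitude/latitude pair into UTM zone 18N coordinates.
# Assumes pyproj is installed (pip install pyproj). EPSG:4326 is geographic
# WGS84; EPSG:32618 is WGS84 / UTM zone 18N.
from pyproj import Transformer

transformer = Transformer.from_crs("EPSG:4326", "EPSG:32618", always_xy=True)

lon, lat = -73.99, 40.75                      # a point in Manhattan, for illustration
easting, northing = transformer.transform(lon, lat)

print(f"UTM 18N: easting {easting:.1f} m, northing {northing:.1f} m")
```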

Map projections translate the curved surface of the Earth to a flat plane. Each projection sacrifices something: the Mercator preserves direction but exaggerates the size of high-latitude regions; equal-area projections maintain proportional land area at the cost of distorting shapes; the Robinson projection compromises carefully to create a visually balanced world. The choice of projection reflects the map’s purpose more than the mapmaker’s preference.
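
To make that trade-off concrete, the short sketch below implements the standard spherical Mercator forward equations (a simplification of the ellipsoidal form used in production mapping) and shows how the north-south scale stretches with latitude:

```python
# Spherical Mercator: x = R * lon, y = R * ln(tan(pi/4 + lat/2)), angles in radians.
# The north-south scale factor is 1 / cos(lat), which is why high-latitude
# regions appear inflated. R is the mean Earth radius, used only for illustration.
import math

R = 6371.0  # km

def mercator_y(lat_deg):
    lat = math.radians(lat_deg)
    return R * math.log(math.tan(math.pi / 4 + lat / 2))

equator_span = mercator_y(1.0) - mercator_y(0.0)    # one degree of latitude at the equator
arctic_span = mercator_y(61.0) - mercator_y(60.0)   # one degree of latitude near 60 N

print(f"1 degree of latitude at the equator: {equator_span:.1f} km on the map")
print(f"1 degree of latitude near 60 N:      {arctic_span:.1f} km on the map")
# The second span is roughly twice the first, since 1 / cos(60 deg) = 2.
```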

Appendix B: Surveying Instruments Through Time

The tools of mapping have evolved dramatically. Ancient civilizations used gnomons to measure shadows, knotted ropes to mark distances, and sighting instruments such as the Greek dioptra and the Roman groma to gauge angles and lay out right angles. Medieval and Renaissance navigators and surveyors added the magnetic compass, the astrolabe, and the plane table, bringing greater rigor to exploration. The eighteenth century brought the sextant for celestial observation and the theodolite, which allowed surveyors to measure angles with unprecedented accuracy.

Modern surveyors rely on total stations, which combine angle measurement with laser-based distance calculation; GNSS receivers capable of centimeter-level precision; LiDAR instruments that generate three-dimensional point clouds of terrain; and drones that capture aerial photographs suitable for photogrammetric reconstruction. Although the instruments have changed, the underlying goal has remained constant: to measure the Earth in a way that minimizes error and maximizes reliability.

Appendix C: Sources of Error and How Mapmakers Correct Them

Cartographic errors emerge from several sources. Positional error occurs when instrument readings or GPS signals are distorted by environmental conditions, equipment limitations, or signal reflections from buildings or terrain. Projection error arises because any flat map must distort some combination of shape, area, direction, or distance. Human interpretation error appears during the classification of aerial images or the delineation of ambiguous features. Temporal error affects maps that have not been updated to reflect natural or man-made changes.

Mapmakers mitigate these errors by using redundant measurements, cross-checking data from multiple sources, incorporating ground-truth verification, applying statistical corrections, and selecting projections tailored to the region being mapped. Accuracy is achieved not through perfection but through a disciplined process of detecting, bounding, and correcting inevitable imperfections.
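
As a small illustration of redundant measurement and statistical correction (the readings below are invented for the example), repeated fixes of the same point can be averaged, and the spread of those fixes gives a defensible bound on the uncertainty that remains:

```python
# Averaging redundant position fixes of one survey marker and estimating the
# remaining uncertainty. Readings are invented; a real workflow would also
# screen outliers and weight fixes by their reported quality.
import statistics

eastings  = [583961.2, 583961.9, 583960.8, 583961.5, 583961.1]      # meters
northings = [4507352.4, 4507351.7, 4507352.9, 4507352.1, 4507352.6]

best_e = statistics.mean(eastings)
best_n = statistics.mean(northings)

# Standard error of the mean: uncertainty shrinks with the square root of
# the number of independent fixes.
n = len(eastings)
se_e = statistics.stdev(eastings) / n ** 0.5
se_n = statistics.stdev(northings) / n ** 0.5

print(f"Adjusted position: ({best_e:.2f} m, {best_n:.2f} m)")
print(f"Estimated uncertainty: +/- {se_e:.2f} m east, +/- {se_n:.2f} m north")
```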


Conclusion: The Eternal Mind Behind the Map

From a Babylonian surveyor tying knots in a rope, to a Polynesian navigator reading waves in the dark, to Lewis and Clark marking compass bearings along unknown rivers, to a modern GIS analyst adjusting satellite layers on a computer screen, the mapmaker’s mind has never changed in its essential character. The world is too vast, varied, and dynamic to be seen directly, so we create representations—models that reveal structure, meaning, and relationship.

A map is not merely a depiction of space. It is a human judgment about what matters. Every accurate map represents a triumph of curiosity over ignorance, order over chaos, and understanding over confusion. The tools are part of the story, but the deeper story is the capability of the person wielding them: the patience to measure carefully, the discipline to verify and correct, the imagination to translate complexity into clarity, and the humility to know that no map is final, complete, or perfect.

Mapmaking is the oldest form of reasoning about the world, and perhaps the most enduring. To draw a map is to make the world legible. To understand a map is to understand the choices of the person who created it. And to appreciate accuracy is to recognize that behind every line lies a mind trying to grasp the infinite.