Elvis Presley on his birthday

A collaboration between Lewis McLain & AI

https://cdn.europosters.eu/image/750/134333.jpg
https://www.rollingstone.com/wp-content/uploads/2018/06/gettyimages-74289871-806b1297-9a36-4c93-8bc9-3850fdfae70e.jpg
https://upload.wikimedia.org/wikipedia/commons/5/54/Graceland_Memphis_Tennessee.jpg

Elvis Presley: A Birthday Reflection on the King Who Changed the Sound of America


The only reason I remember Elvis’s birthday is that it is the same as my brother’s; we lost him 10 years ago. https://citybaseblog.net/2016/03/12/thinking-about-my-bro

January 8 marks the birthday of Elvis Presley, born in 1935 in Tupelo, Mississippi—one half of a pair of twins, the other lost at birth. That quiet fact matters. Elvis always carried the gravity of absence and longing, and it surfaced in his voice long before the world learned his name. Today, remembering Elvis isn’t just about swiveling hips or rhinestone jumpsuits. It’s about a cultural detonation that permanently altered music, identity, and the idea of what American sound could be.

Elvis arrived at a strange intersection in history. America was prosperous, anxious, segregated, and restless. Radio waves were neatly categorized: country over here, blues over there, pop kept clean and polite. Elvis crossed those lines without asking permission. He absorbed gospel harmonies from church pews, blues from Beale Street, country from the Grand Ole Opry, and then—almost accidentally—became the bridge. His early recordings at Sun Studio weren’t polished statements; they were experiments that crackled with risk. When he sang, genres stopped behaving.

What unsettled people wasn’t just the music. It was embodiment. Elvis didn’t perform songs so much as inhabit them. His voice could sound wounded and defiant in the same breath. His movements—so often reduced to caricature—were actually an expression of rhythm learned from Black musicians whose physicality had long been policed. To some, Elvis looked dangerous. To others, liberating. That tension is exactly why he mattered.

Fame, of course, is a blunt instrument. By the late 1950s, Elvis was everywhere—movies, merchandise, magazine covers—yet increasingly constrained. The U.S. Army drafted him in 1958, a moment that symbolically pressed the rebel into uniform. When he returned, the music softened. Hollywood took over. The edges dulled. Many artists would have faded quietly into nostalgia at that point.

Elvis didn’t.

The 1968 Comeback Special remains one of the great resurrection moments in American pop culture. Dressed in black leather, stripped of spectacle, Elvis stood close to the audience and sang as if reminding himself who he was. No choreography, no cinematic gloss—just presence. The voice was older, deeper, seasoned by disappointment. It wasn’t a return to youth; it was a confrontation with time. Few artists ever reclaim themselves so publicly.

The 1970s brought both triumph and tragedy. Vegas shows grew grand and exhausting. The jumpsuits glittered brighter as the man inside struggled. Elvis became a symbol of excess even as he remained, paradoxically, deeply shy and generous. He gave away cars, paid strangers’ medical bills, and carried a private spiritual hunger that never quite settled. America watched his decline with the same appetite that once celebrated his rise—an uncomfortable mirror held up to celebrity itself.

Elvis died in 1977 at just 42 years old, but death did not quiet him. His music still moves through culture like a low-frequency hum. Every genre-mixing artist owes him a debt. Every performer who dares to be both vulnerable and electric walks in his shadow. He did not invent rock and roll—but he translated it, amplified it, and delivered it to a nation not yet ready to hear itself reflected so honestly.

On his birthday, Elvis feels less like a relic and more like a reminder. Art is dangerous when it crosses boundaries. Beauty often comes mixed with cost. And sometimes a voice appears at exactly the right moment—not to soothe a culture, but to shake it awake.

Elvis didn’t just sing America. He revealed it.

“Be Still, and Know That I Am God”

A collaboration between Lewis McLain & AI


Today is the 40th anniversary of a day I wish had not happened. We were overly involved: giving Marriage Encounter weekends, chairing the PTA at our school, volunteering almost full-time at a new church we were building, running a business, and raising a teenager. I had worked on a church finance report at my office, gone home at 2:30 am, and boarded a 7 am flight for an all-day meeting in College Station. I got home that night and could not remember a single thing about the meeting, or even the flight. It was days before I recovered from the meltdown, not of anger but of disgust with myself for trying to be everything for everybody.

I am stealing Linda’s favorite Bible verse today. She repeats it often and has it placed in a prominent spot in our house. It has become one of my favorites, too. She says it in a quiet, everyday voice. I read it as BE STILL! And know that I am God.

“Be still, and know that I am God.” The sentence is short, balanced, almost deceptively simple. Yet for centuries it has carried the weight of wars, exile, fear, worship, and quiet trust. Found in Psalm 46, this line is not a gentle suggestion whispered to people lounging in peace. It is a command spoken into chaos.

Psalm 46 opens with motion and violence: the earth giving way, mountains falling into the sea, waters roaring and foaming, nations in uproar, kingdoms tottering. The psalmist piles instability upon instability until the world feels unmoored. Only then comes the command: Be still. In the original Hebrew, the phrase carries the sense of “cease,” “let go,” or even “drop your weapons.” It is not passive calm; it is the deliberate ending of frantic striving. God is not saying, “Relax, nothing matters.” He is saying, “Stop acting as though everything depends on you.”

That context matters. This verse is often lifted out and framed as a personal mantra for stress relief, and it certainly speaks to the anxious heart. But originally it is cosmic in scale. God addresses the nations themselves—armies, rulers, systems, and powers—telling them to stand down and recognize who truly governs history. Human noise does not unsettle Him. Political turbulence does not confuse Him. Natural disasters do not surprise Him. Stillness is not for God’s benefit; it is for ours, so that recognition can happen.

To “know” God here is not mere intellectual assent. In biblical language, knowing is relational and experiential. It is the difference between reading about fire and feeling its warmth. Stillness creates the conditions for that knowledge. When activity, argument, fear, and self-justification pause, awareness sharpens. The mind stops racing long enough to perceive what was already true: God is present, sovereign, and unthreatened.

The psalm balances this command with reassurance. Just a few verses earlier we read that God is “an ever-present help in trouble.” Stillness is not abandonment. It is trust enacted. It is the refusal to panic as a form of faith. The river that “makes glad the city of God” flows quietly even as nations rage. The contrast is intentional. God’s sustaining power does not roar; it endures.

Across Scripture, this pattern repeats. Stillness precedes revelation. Moses stands at the Red Sea with no visible escape. Elijah hears God not in wind or earthquake or fire, but in a low whisper. Jesus sleeps in a storm while seasoned fishermen panic, then rises and stills the waves with a word. In each case, divine authority is revealed not through frantic motion but through unshakable calm.

In modern life, stillness is countercultural. We reward speed, productivity, instant reaction, and constant commentary. Silence feels unproductive, even irresponsible. Yet Psalm 46 insists that some truths cannot be grasped while running. Knowing God requires space—space for listening, space for humility, space for surrender. Stillness becomes an act of resistance against the illusion of control.

The verse ends with a promise: “I will be exalted among the nations, I will be exalted in the earth.” God’s sovereignty is not fragile. It does not depend on our vigilance or anxiety. History bends toward His purposes whether we strain ourselves or rest in Him. Stillness does not delay His work; it aligns us with it.

“Be still, and know that I am God” is therefore both comfort and confrontation. It comforts the weary by lifting the burden of omnipotence from human shoulders. It confronts the proud by exposing how much noise we make to avoid surrender. In stillness, excuses fall away. What remains is God—present, powerful, and worthy of trust.

The strange irony is that the world does not become quieter when we obey this command. Wars may still rage. Markets may still swing. Illness may still come. But the soul grows anchored. Stillness does not change circumstances first; it changes perception. And with that change comes a steadiness that no external upheaval can easily steal.

In the end, the verse does not invite escape from reality. It invites deeper engagement with it—rooted not in fear or frenzy, but in the knowledge that God is God, and we are not.


A small confession: I was up until 2:30 this morning working on a project and loving it. Some of us never learn. LFM

Epiphany

A collaboration between Lewis McLain & AI


It began with the sound of rain.

Not the violent kind that rattles windows and demands attention, but the kind that seems to think—pausing, resuming, whispering to itself. The rain had followed him down the street and into the old stone church, where it softened into echoes and silence.

He had not planned to stay. The church was only a shortcut between the office and the parking lot, a dry passage through a wet afternoon. But something slowed him. He found himself in the back pew, coat still damp, listening to the hush settle around him as the last of the lights were switched off one row at a time.

The nave held the faint scent of incense and old stone—memory suspended in air. In the stillness, he could feel his own breathing again, and beneath it the steady, stubborn rhythm of his heart, like a clock that had kept time through disappointment without ever being consulted.


The week had been heavy in ways that never show up on calendars or balance sheets. A conversation delayed too long. A letter unopened on the kitchen table. A friendship fractured not by malice but by neglect. He had lived lately by screens and schedules, moving efficiently while drifting inwardly, performing life rather than inhabiting it.

When the rain began earlier that afternoon, it felt as though the world had decided to mourn first.

He looked toward the altar. It was plain—no ornament, no spectacle. A linen cloth folded with care. Above it, a wooden cross, worn smooth by time and eyes. The figure upon it was neither triumphant nor dramatic. It looked tired. Human.

In that weariness, he recognized something familiar.


Lightning flared suddenly through the stained glass, flooding the nave with color for a heartbeat—reds and blues and golds briefly made whole. In that instant, he noticed a woman kneeling several pews ahead of him.

She hadn’t been there before. Or perhaps she had, and he had not been ready to see her.

She wasn’t praying with folded hands but with palms open, resting lightly on her knees, as though offering something invisible. When the light faded and the thunder rolled, she did not move.

The storm continued its rhythm, and the building seemed to breathe with it: thunder, pause, rain, silence.

A word surfaced in his mind—epiphany. A word he remembered from long ago, defined as a sudden revelation, a moment when something hidden becomes visible. A manifestation. An appearing.

For the first time in years, he wondered whether such moments still happened—not in Scripture or spectacle, but quietly, woven into ordinary time.


He closed his eyes.

The air smelled of damp stone and candle wax. Images rose without invitation: his father’s laughter, the sterile light of a hospital room, the way a lake turned silver just before sunset. A stranger’s voice from years ago, saying, You look like someone still searching.

His life felt layered, translucent, as though meaning had always been present but partially obscured. One layer lifted, then another—not by effort, but by grace.

When he opened his eyes, the woman was gone.

Only her umbrella leaned against the front pew.

He stood and walked forward, intending to return it if she was still nearby. As he approached, something inside him loosened—a knot he hadn’t known how to name. The familiar tension between doing and being, between guilt and mercy, softened.

The umbrella was patterned with constellations. When he lifted it, droplets slid across the fabric like falling stars.


Outside, the storm had broken.

The air was sharp with ozone and freshness. Streetlights shimmered on wet pavement. Cars hissed past, ordinary and miraculous at once. Across the street, a diner sign flickered OPEN—half the letters burned out, yet unmistakable.

He laughed quietly. Even broken, it told the truth.

Inside, the waitress poured him coffee without asking. The woman from the church sat near the window, stirring her tea. She glanced up, smiled faintly, and nodded.

No words passed between them. None were required.

He sipped the coffee. The city hummed like an organ warming up. Outside, clouds thinned, and the first ribbon of sunrise touched the street. It caught the rim of his cup, the chrome of the jukebox, and the tear he hadn’t noticed had fallen.

Everything aligned—not as an explanation, but as a recognition.

The rain. The church. The cross. The lightning. The diner’s broken sign.

Not revelation in thunder. Not truth carved in stone.

Just the world, quietly saying: I am here.


When he left the diner, he didn’t take the umbrella.

He wanted to feel the light on his face.

The city resumed its noise—engines, voices, footsteps. Nothing had changed, and yet everything had. He carried no answers, no resolutions, no plans—only a stillness, warm and steady, glowing just behind his ribs.

He was no longer alone in the silence.

As he turned the corner, he thought again of the woman and the umbrella left behind.

Why hadn’t he given it back?

Perhaps because she hadn’t truly forgotten it.
Perhaps because some gifts aren’t meant to be returned.

The umbrella had done its work—a small constellation pointing toward a larger one, a reminder that revelation often leaves something behind.

Something you don’t need to keep
in order to remember.


Epilogue

Epiphany is a word that means “to appear.”

But perhaps its truer meaning is this:
to notice.

For the divine has always been appearing. The Magi followed the star to see the Child.
It is we who, at last, learn to look.

January 5, 1933 — Steel, Strain, and the Day the Golden Gate Became Measurable

A collaboration between Lewis McLain & AI

https://www.pbs.org/wgbh/americanexperience/media/filer_public_thumbnails/filer_public/eb/e0/ebe02301-90e7-4abd-941b-dbf4300d2d3e/goldengate_spinning.jpg__400x567_q85_crop_subsampling-2_upscale.jpg
https://www.goldengate.org/assets/1/6/ggb-exhibit2-3_1.jpg

January 5, 1933 was the day the Golden Gate Bridge stopped being an argument and became a set of numbers, tolerances, stresses, and human limits. It was the day the bridge entered the physical world—where ideas are tested not by opinion, but by wind, gravity, and steel stretched to its breaking point.

This is what made that decision extraordinary: nothing at this scale had ever been built under conditions like these.


Before January 5: a problem defined by physics

The Golden Gate strait is not merely wide; it is hostile. The main span would need to cross 4,200 feet of open water—longer than any suspension bridge span in the world at the time. The water below reached depths of over 300 feet, with tidal currents exceeding 7 knots. Winds routinely pushed 50–60 mph through the narrow opening. Add fog, salt corrosion, and an active earthquake zone, and the engineering margins grew thin fast.

Critics were not being timid. They were doing the math.


The decision to build anyway

The project moved forward under the leadership of Joseph Strauss, supported by local governments and financed—barely—when Amadeo Giannini personally backed the bonds. The theoretical backbone of the structure came from calculations performed largely by Charles Alton Ellis, whose work translated vision into equations that said, yes, this can stand.

On January 5, those equations met the bay.


Foundations: anchoring the impossible

The first technical challenge was not the span—it was the towers.

Each tower would rise 746 feet above the water, taller than most buildings of the era. To support them, crews sank massive foundations into bedrock far below the surface. This required working from floating platforms, battling currents that could push equipment sideways and make precise placement nearly impossible.

The south tower’s foundation alone weighed over 60,000 tons once complete.


Towers first, cables later

Construction followed a strict sequence:

  1. Foundations and piers
  2. Steel towers
  3. Main suspension cables
  4. Vertical suspender ropes
  5. Roadway deck

The towers were built section by section, steel plates riveted together in midair. Workers balanced on beams sometimes no wider than a boot sole, aligning steel to tolerances measured in fractions of an inch—because errors multiplied dramatically across a mile-long span.


The cables: strength measured in thousands of miles

The bridge’s two main cables remain its most astonishing technical feat.

Each cable:

  • Measures 36⅜ inches in diameter
  • Contains 27,572 individual steel wires
  • Each wire is 0.192 inches thick
  • Total wire length per cable: ~40,000 miles
  • Combined wire length: ~80,000 miles—enough to wrap around the Earth more than three times

The wires were not prefabricated. They were spun in place using a moving wheel that carried each wire back and forth across the span, one at a time. Cable spinning took six months, with workers exposed to wind, fog, and vertigo at all hours.

Each cable supports a load of roughly 25,000 tons.
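The wire figures above can be sanity-checked with simple arithmetic. A minimal sketch, assuming the commonly published end-to-end length of each main cable (about 7,650 feet, a figure not stated in this essay):

```python
# Back-of-envelope check of the main-cable wire arithmetic.
# CABLE_LENGTH_FT is an assumed published figure, not from the text above.
WIRES_PER_CABLE = 27_572
CABLE_LENGTH_FT = 7_650          # approximate end-to-end length of one cable
FT_PER_MILE = 5_280
EARTH_CIRCUMFERENCE_MI = 24_901  # equatorial circumference, miles

# Each cable's total wire length: wire count times cable length.
miles_per_cable = WIRES_PER_CABLE * CABLE_LENGTH_FT / FT_PER_MILE
total_miles = 2 * miles_per_cable

print(f"Wire per cable: ~{miles_per_cable:,.0f} miles")  # roughly 40,000
print(f"Both cables:    ~{total_miles:,.0f} miles")      # roughly 80,000
print(f"Earth wraps:    {total_miles / EARTH_CIRCUMFERENCE_MI:.1f}")
```

Run it and the numbers land close to 40,000 miles per cable and 80,000 miles combined, which is where the "more than three times around the Earth" comparison comes from.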


Tension, strain, and living steel

Steel under tension behaves differently than steel at rest. Engineers calculated:

  • Dead load (weight of the bridge itself)
  • Live load (traffic, pedestrians, wind)
  • Dynamic wind loading
  • Seismic forces

At full load, the bridge can sway up to 27 feet laterally. This is not a flaw. It is survival. Rigidity would have meant failure.

The bridge is designed to move, not resist movement.


Safety engineered into the process

Strauss insisted on safety innovations unheard of at the time:

  • Mandatory hard hats
  • Lifelines and handrails
  • On-site medical staff
  • And the massive safety net beneath the deck

The net saved 19 men—the “Halfway-to-Hell Club.” Eleven still died, ten in a single accident when a scaffold tore through the net. Even so, the fatality rate was far lower than comparable projects of the era.

Safety, for once, was treated as a technical requirement—not an afterthought.


The roadway: hanging a city street in the air

The deck was assembled in prefabricated sections and lifted into place by cranes mounted on the cables themselves. Once attached, vertical suspender ropes transferred load from deck to cable, distributing weight evenly.

Final dimensions:

  • Total length: 8,981 feet
  • Main span: 4,200 feet
  • Width: 90 feet
  • Clearance above water: 220 feet

Every number mattered. Change one, and the system changed everywhere.


After January 5: proof through survival

When the bridge opened in 1937, it immediately carried traffic loads no one fully anticipated. It later survived:

  • The 1957 Daly City (San Francisco) earthquake
  • The 1989 Loma Prieta earthquake
  • Constant wind cycles for nearly a century

Its survival validated not only the structure, but the philosophy behind it: design for movement, design for uncertainty, design for people.


What January 5 ultimately celebrates

January 5 is not a date about ribbon-cutting. It is about committing to numbers you cannot yet test and trusting human skill to meet them.

It honors the moment when:

  • theory met tide,
  • equations met wind,
  • safety met necessity,
  • and steel was asked to behave like a living thing.

The Golden Gate Bridge did not begin as poetry.
It began as calculations, rivets, wire under strain, and men willing to trust both.

That trust—measured in miles, tons, inches, and lives—is what January 5 truly marks.

Artificial Intelligence in City Government: From Adoption to Accountability

A Practical Framework for Innovation, Oversight, and Public Trust

A collaboration between Lewis McLain & AI – A Companion to the previous blog on AI

Artificial intelligence has moved from novelty to necessity in public institutions. What began as experimental tools for drafting documents or summarizing data is now embedded in systems that influence budgeting, service delivery, enforcement prioritization, procurement screening, and public communication. Cities are discovering that AI is no longer optional—but neither is governance.

This essay unifies two truths that are often treated as competing ideas but must now be held together:

  1. AI adoption is inevitable and necessary if cities are to remain operationally effective and fiscally sustainable.
  2. AI oversight is now unavoidable wherever systems influence decisions affecting people, rights, or public trust.

These are not contradictions. They are sequential realities. Adoption without governance leads to chaos. Governance without adoption leads to irrelevance. The task for modern city leadership is to do both—intentionally.

I. The Adoption Imperative: AI as Municipal Infrastructure

Cities face structural pressures that are not temporary: constrained budgets, difficulty recruiting and retaining staff, growing service demands, and rising analytical complexity. AI tools offer a way to expand institutional capacity without expanding payrolls at the same rate.

Common municipal uses already include:

  • Drafting ordinances, reports, and correspondence
  • Summarizing public input and staff analysis
  • Forecasting revenues, expenditures, and service demand
  • Supporting customer service through chat or triage tools
  • Enhancing internal research and analytics

In this sense, AI is not a gadget. It is infrastructure, comparable to ERP systems, GIS, or financial modeling platforms. Cities that delay adoption will find themselves less capable, less competitive, and more expensive to operate.

Adoption, however, is not merely technical. AI reshapes workflows, compresses tasks, and changes how work is performed. Over time, this may alter staffing needs. The question is not whether AI will change city operations—it already is. The question is whether those changes are guided or accidental.

II. The Oversight Imperative: Why Governance Is Now Required

As AI systems move beyond internal productivity and begin to influence decisions—directly or indirectly—oversight becomes essential.

AI systems are now used, or embedded through vendors, in areas such as:

  • Permit or inspection prioritization
  • Eligibility screening for programs or services
  • Vendor risk scoring and procurement screening
  • Enforcement triage
  • Public safety analytics

When AI recommendations shape outcomes, even if a human signs off, accountability cannot be vague. Errors at scale, opaque logic, and undocumented assumptions create legal exposure and erode public trust faster than traditional human error.

Oversight is required because:

  • Scale magnifies mistakes: a single flaw can affect thousands before detection.
  • Opacity undermines legitimacy: residents are less forgiving of decisions they cannot understand.
  • Legal scrutiny is increasing: courts and legislatures are paying closer attention to algorithmic decision-making.

Oversight is not about banning AI. It is about ensuring AI is used responsibly, transparently, and under human control.

III. Bridging Adoption and Oversight: A Two-Speed Framework

The tension between “move fast” and “govern carefully” dissolves once AI uses are separated by risk.

Low-Risk, Internal AI Uses

Examples include drafting, summarization, forecasting, research, and internal analytics.

Approach:
Adopt quickly, document lightly, train staff, and monitor outcomes.

Decision-Adjacent or High-Risk AI Uses

Examples include enforcement prioritization, eligibility determinations, public safety analytics, and procurement screening affecting vendors.

Approach:
Require review, documentation, transparency, and meaningful human oversight before deployment.

This two-speed framework allows cities to capture productivity benefits immediately while placing guardrails only where risk to rights, equity, or trust is real.
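The sorting step at the heart of the two-speed framework can be sketched in a few lines. This is an illustrative triage helper, not an official taxonomy; the domain labels and track descriptions are assumptions chosen for the example:

```python
# Illustrative sketch of the two-speed triage described above.
# The domain labels below are hypothetical, not a regulatory category list.
HIGH_RISK_DOMAINS = {
    "enforcement_prioritization",
    "eligibility_determination",
    "public_safety_analytics",
    "procurement_screening",
}

def review_track(use_case: str, domains: set[str]) -> str:
    """Return the governance track for a proposed AI use.

    Any overlap with a high-risk domain routes the use case to full
    review; everything else takes the fast track with light monitoring.
    """
    if domains & HIGH_RISK_DOMAINS:
        return f"{use_case}: full review (documentation, transparency, human oversight)"
    return f"{use_case}: fast track (adopt, document lightly, monitor outcomes)"

print(review_track("draft staff report", {"internal_drafting"}))
print(review_track("permit inspection triage", {"enforcement_prioritization"}))
```

The point of the sketch is the single branch: risk classification happens once, up front, so low-risk productivity uses are never stuck behind the review process that high-risk uses genuinely need.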

IV. Texas Context: Statewide Direction on AI Governance

The Texas Legislature reinforced this balanced approach through the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026. The law does not prohibit AI use. Instead, it establishes expectations for transparency, accountability, and prohibited practices—particularly for government entities.

Key elements include:

  • Disclosure when residents interact with AI systems
  • Prohibitions on social scoring by government
  • Restrictions on discriminatory AI use
  • Guardrails around biometric and surveillance applications
  • Civil penalties for unlawful or deceptive deployment
  • Creation of a statewide Artificial Intelligence Council

The message is clear: Texas expects governments to adopt AI responsibly—neither recklessly nor fearfully.

V. Implications for Cities and Transit Agencies

Cities are already using AI, often unknowingly, through vendor-provided software. Transit agencies face elevated exposure because they combine finance, enforcement, surveillance, and public safety.

The greatest risk is not AI itself, but uncontrolled AI:

  • Vendor-embedded algorithms without disclosure
  • No documented human accountability
  • No audit trail
  • No process for suspension or correction

Cities that act early reduce legal risk, preserve public trust, and maintain operational flexibility.

VI. Workforce Implications: Accurate and Defensible Language

AI will change how work is done over time. It would be inaccurate and irresponsible to claim otherwise.

At the same time, AI does not mandate immediate workforce reductions. In public institutions, workforce impacts—if they occur—are most likely to happen gradually through:

  • Attrition
  • Reassignment
  • Retraining
  • Role redesign

Final staffing decisions remain with City leadership and City Council. AI is a tool for improving capacity and sustainability, not an automatic trigger for reductions.

Conclusion: Coherent, Accountable AI

AI adoption without governance invites chaos. Governance without adoption invites stagnation. Cities that succeed will do both—moving quickly where risk is low and governing carefully where risk is high.

This is not about technology hype. It is about institutional competence in a digital age.


Appendix 1 — Texas Responsible Artificial Intelligence Governance Act (HB 149)


                                                   H.B. No. 149

AN ACT

relating to regulation of the use of artificial intelligence systems in this state; providing civil penalties.

BE IT ENACTED BY THE LEGISLATURE OF THE STATE OF TEXAS:

SECTION 1.  This Act may be cited as the Texas Responsible Artificial Intelligence Governance Act.

SECTION 2.  Section 503.001, Business & Commerce Code, is amended by amending Subsections (a) and (e) and adding Subsections (b-1) and (f) to read as follows:

(a)  In this section:

(1)  “Artificial intelligence system” has the meaning assigned by Section 551.001.

(2)  “Biometric identifier” means a retina or iris scan, fingerprint, voiceprint, or record of hand or face geometry.

(b-1)  For purposes of Subsection (b), an individual has not been informed of and has not provided consent for the capture or storage of a biometric identifier of an individual for a commercial purpose based solely on the existence of an image or other media containing one or more biometric identifiers of the individual on the Internet or other publicly available source unless the image or other media was made publicly available by the individual to whom the biometric identifiers relate.

(e)  This section does not apply to:

(1)  voiceprint data retained by a financial institution or an affiliate of a financial institution, as those terms are defined by 15 U.S.C. Section 6809;

(2)  the training, processing, or storage of biometric identifiers involved in developing, training, evaluating, disseminating, or otherwise offering artificial intelligence models or systems, unless a system is used or deployed for the purpose of uniquely identifying a specific individual; or

(3)  the development or deployment of an artificial intelligence model or system for the purposes of:

(A)  preventing, detecting, protecting against, or responding to security incidents, identity theft, fraud, harassment, malicious or deceptive activities, or any other illegal activity;

(B)  preserving the integrity or security of a system; or

(C)  investigating, reporting, or prosecuting a person responsible for a security incident, identity theft, fraud, harassment, a malicious or deceptive activity, or any other illegal activity.

(f)  If a biometric identifier captured for the purpose of training an artificial intelligence system is subsequently used for a commercial purpose not described by Subsection (e), the person possessing the biometric identifier is subject to:

(1)  this section’s provisions for the possession and destruction of a biometric identifier; and

(2)  the penalties associated with a violation of this section.

SECTION 3.  Section 541.104(a), Business & Commerce Code, is amended to read as follows:

(a)  A processor shall adhere to the instructions of a controller and shall assist the controller in meeting or complying with the controller’s duties or requirements under this chapter, including:

(1)  assisting the controller in responding to consumer rights requests submitted under Section 541.051 by using appropriate technical and organizational measures, as reasonably practicable, taking into account the nature of processing and the information available to the processor;

(2)  assisting the controller with regard to complying with requirements relating to the security of processing personal data, and if applicable, the personal data collected, stored, and processed by an artificial intelligence system, as that term is defined by Section 551.001, and to the notification of a breach of security of the processor’s system under Chapter 521, taking into account the nature of processing and the information available to the processor; and

(3)  providing necessary information to enable the controller to conduct and document data protection assessments under Section 541.105.

SECTION 4.  Title 11, Business & Commerce Code, is amended by adding Subtitle D to read as follows:

SUBTITLE D.  ARTIFICIAL INTELLIGENCE PROTECTION

CHAPTER 551.  GENERAL PROVISIONS

Sec. 551.001.  DEFINITIONS.  In this subtitle:

(1)  “Artificial intelligence system” means any machine-based system that, for any explicit or implicit objective, infers from the inputs the system receives how to generate outputs, including content, decisions, predictions, or recommendations, that can influence physical or virtual environments.

(2)  “Consumer” means an individual who is a resident of this state acting only in an individual or household context.  The term does not include an individual acting in a commercial or employment context.

(3)  “Council” means the Texas Artificial Intelligence Council established under Chapter 554.

Sec. 551.002.  APPLICABILITY OF SUBTITLE.  This subtitle applies only to a person who:

(1)  promotes, advertises, or conducts business in this state;

(2)  produces a product or service used by residents of this state; or

(3)  develops or deploys an artificial intelligence system in this state.

Sec. 551.003.  CONSTRUCTION AND APPLICATION OF SUBTITLE.  This subtitle shall be broadly construed and applied to promote its underlying purposes, which are to:

(1)  facilitate and advance the responsible development and use of artificial intelligence systems;

(2)  protect individuals and groups of individuals from known and reasonably foreseeable risks associated with artificial intelligence systems;

(3)  provide transparency regarding risks in the development, deployment, and use of artificial intelligence systems; and

(4)  provide reasonable notice regarding the use or contemplated use of artificial intelligence systems by state agencies.

CHAPTER 552.  ARTIFICIAL INTELLIGENCE PROTECTION

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 552.001.  DEFINITIONS.  In this chapter:

(1)  “Deployer” means a person who deploys an artificial intelligence system for use in this state.

(2)  “Developer” means a person who develops an artificial intelligence system that is offered, sold, leased, given, or otherwise provided in this state.

(3)  “Governmental entity” means any department, commission, board, office, authority, or other administrative unit of this state or of any political subdivision of this state that exercises governmental functions under the authority of the laws of this state.  The term does not include:

(A)  a hospital district created under the Health and Safety Code or Article IX, Texas Constitution; or

(B)  an institution of higher education, as defined by Section 61.003, Education Code, including any university system or any component institution of the system.

Sec. 552.002.  CONSTRUCTION OF CHAPTER.  This chapter may not be construed to:

(1)  impose a requirement on a person that adversely affects the rights or freedoms of any person, including the right of free speech; or

(2)  authorize any department or agency other than the Department of Insurance to regulate or oversee the business of insurance.

Sec. 552.003.  LOCAL PREEMPTION.  This chapter supersedes and preempts any ordinance, resolution, rule, or other regulation adopted by a political subdivision regarding the use of artificial intelligence systems.

SUBCHAPTER B. DUTIES AND PROHIBITIONS ON USE OF ARTIFICIAL INTELLIGENCE

Sec. 552.051.  DISCLOSURE TO CONSUMERS.  (a)  In this section, “health care services” means services related to human health or to the diagnosis, prevention, or treatment of a human disease or impairment provided by an individual licensed, registered, or certified under applicable state or federal law to provide those services.

(b)  A governmental agency that makes available an artificial intelligence system intended to interact with consumers shall disclose to each consumer, before or at the time of interaction, that the consumer is interacting with an artificial intelligence system.

(c)  A person is required to make the disclosure under Subsection (b) regardless of whether it would be obvious to a reasonable consumer that the consumer is interacting with an artificial intelligence system.

(d)  A disclosure under Subsection (b):

(1)  must be clear and conspicuous;

(2)  must be written in plain language; and

(3)  may not use a dark pattern, as that term is defined by Section 541.001.

(e)  A disclosure under Subsection (b) may be provided by using a hyperlink to direct a consumer to a separate Internet web page.

(f)  If an artificial intelligence system is used in relation to health care service or treatment, the provider of the service or treatment shall provide the disclosure under Subsection (b) to the recipient of the service or treatment or the recipient’s personal representative not later than the date the service or treatment is first provided, except in the case of emergency, in which case the provider shall provide the required disclosure as soon as reasonably possible.

Sec. 552.052.  MANIPULATION OF HUMAN BEHAVIOR.  A person may not develop or deploy an artificial intelligence system in a manner that intentionally aims to incite or encourage a person to:

(1)  commit physical self-harm, including suicide;

(2)  harm another person; or

(3)  engage in criminal activity.

Sec. 552.053.  SOCIAL SCORING.  A governmental entity may not use or deploy an artificial intelligence system that evaluates or classifies a natural person or group of natural persons based on social behavior or personal characteristics, whether known, inferred, or predicted, with the intent to calculate or assign a social score or similar categorical estimation or valuation of the person or group of persons that results or may result in:

(1)  detrimental or unfavorable treatment of a person or group of persons in a social context unrelated to the context in which the behavior or characteristics were observed or noted;

(2)  detrimental or unfavorable treatment of a person or group of persons that is unjustified or disproportionate to the nature or gravity of the observed or noted behavior or characteristics; or

(3)  the infringement of any right guaranteed under the United States Constitution, the Texas Constitution, or state or federal law.

Sec. 552.054.  CAPTURE OF BIOMETRIC DATA.  (a)  In this section, “biometric data” means data generated by automatic measurements of an individual’s biological characteristics.  The term includes a fingerprint, voiceprint, eye retina or iris, or other unique biological pattern or characteristic that is used to identify a specific individual.  The term does not include a physical or digital photograph or data generated from a physical or digital photograph, a video or audio recording or data generated from a video or audio recording, or information collected, used, or stored for health care treatment, payment, or operations under the Health Insurance Portability and Accountability Act of 1996 (42 U.S.C. Section 1320d et seq.).

(b)  A governmental entity may not develop or deploy an artificial intelligence system for the purpose of uniquely identifying a specific individual using biometric data or the targeted or untargeted gathering of images or other media from the Internet or any other publicly available source without the individual’s consent, if the gathering would infringe on any right of the individual under the United States Constitution, the Texas Constitution, or state or federal law.

(c)  A violation of Section 503.001 is a violation of this section.

Sec. 552.055.  CONSTITUTIONAL PROTECTION.  (a)  A person may not develop or deploy an artificial intelligence system with the sole intent for the artificial intelligence system to infringe, restrict, or otherwise impair an individual’s rights guaranteed under the United States Constitution.

(b)  This section is remedial in purpose and may not be construed to create or expand any right guaranteed by the United States Constitution.

Sec. 552.056.  UNLAWFUL DISCRIMINATION.  (a)  In this section:

(1)  “Financial institution” has the meaning assigned by Section 201.101, Finance Code.

(2)  “Insurance entity” means:

(A)  an entity described by Section 82.002(a), Insurance Code;

(B)  a fraternal benefit society regulated under Chapter 885, Insurance Code; or

(C)  the developer of an artificial intelligence system used by an entity described by Paragraph (A) or (B).

(3)  “Protected class” means a group or class of persons with a characteristic, quality, belief, or status protected from discrimination by state or federal civil rights laws, and includes race, color, national origin, sex, age, religion, or disability.

(b)  A person may not develop or deploy an artificial intelligence system with the intent to unlawfully discriminate against a protected class in violation of state or federal law.

(c)  For purposes of this section, a disparate impact is not sufficient by itself to demonstrate an intent to discriminate.

(d)  This section does not apply to an insurance entity for purposes of providing insurance services if the entity is subject to applicable statutes regulating unfair discrimination, unfair methods of competition, or unfair or deceptive acts or practices related to the business of insurance.

(e)  A federally insured financial institution is considered to be in compliance with this section if the institution complies with all federal and state banking laws and regulations.

Sec. 552.057.  CERTAIN SEXUALLY EXPLICIT CONTENT AND CHILD PORNOGRAPHY.  A person may not:

(1)  develop or distribute an artificial intelligence system with the sole intent of producing, assisting or aiding in producing, or distributing:

(A)  visual material in violation of Section 43.26, Penal Code; or

(B)  deep fake videos or images in violation of Section 21.165, Penal Code; or

(2)  intentionally develop or distribute an artificial intelligence system that engages in text-based conversations that simulate or describe sexual conduct, as that term is defined by Section 43.25, Penal Code, while impersonating or imitating a child younger than 18 years of age.

SUBCHAPTER C.  ENFORCEMENT

Sec. 552.101.  ENFORCEMENT AUTHORITY.  (a)  The attorney general has exclusive authority to enforce this chapter, except to the extent provided by Section 552.106.

(b)  This chapter does not provide a basis for, and is not subject to, a private right of action for a violation of this chapter or any other law.

Sec. 552.102.  INFORMATION AND COMPLAINTS.  The attorney general shall create and maintain an online mechanism on the attorney general’s Internet website through which a consumer may submit a complaint under this chapter to the attorney general.

Sec. 552.103.  INVESTIGATIVE AUTHORITY.  (a)  If the attorney general receives a complaint through the online mechanism under Section 552.102 alleging a violation of this chapter, the attorney general may issue a civil investigative demand to determine if a violation has occurred.  The attorney general shall issue demands in accordance with and under the procedures established under Section 15.10.

(b)  The attorney general may request from the person reported through the online mechanism, pursuant to a civil investigative demand issued under Subsection (a):

(1)  a high-level description of the purpose, intended use, deployment context, and associated benefits of the artificial intelligence system with which the person is affiliated;

(2)  a description of the type of data used to program or train the artificial intelligence system;

(3)  a high-level description of the categories of data processed as inputs for the artificial intelligence system;

(4)  a high-level description of the outputs produced by the artificial intelligence system;

(5)  any metrics the person uses to evaluate the performance of the artificial intelligence system;

(6)  any known limitations of the artificial intelligence system;

(7)  a high-level description of the post-deployment monitoring and user safeguards the person uses for the artificial intelligence system, including, if the person is a deployer, the oversight, use, and learning process established by the person to address issues arising from the system’s deployment; or

(8)  any other relevant documentation reasonably necessary for the attorney general to conduct an investigation under this section.

Sec. 552.104.  NOTICE OF VIOLATION; OPPORTUNITY TO CURE.  (a)  If the attorney general determines that a person has violated or is violating this chapter, the attorney general shall notify the person in writing of the determination, identifying the specific provisions of this chapter the attorney general alleges have been or are being violated.

(b)  The attorney general may not bring an action against the person:

(1)  before the 60th day after the date the attorney general provides the notice under Subsection (a); or

(2)  if, before the 60th day after the date the attorney general provides the notice under Subsection (a), the person:

(A)  cures the identified violation; and

(B)  provides the attorney general with a written statement that the person has:

(i)  cured the alleged violation;

(ii)  provided supporting documentation to show the manner in which the person cured the violation; and

(iii)  made any necessary changes to internal policies to reasonably prevent further violation of this chapter.

Sec. 552.105.  CIVIL PENALTY; INJUNCTION.  (a)  A person who violates this chapter and does not cure the violation under Section 552.104 is liable to this state for a civil penalty in an amount of:

(1)  for each violation the court determines to be curable or a breach of a statement submitted to the attorney general under Section 552.104(b)(2), not less than $10,000 and not more than $12,000;

(2)  for each violation the court determines to be uncurable, not less than $80,000 and not more than $200,000; and

(3)  for a continued violation, not less than $2,000 and not more than $40,000 for each day the violation continues.

(b)  The attorney general may bring an action in the name of this state to:

(1)  collect a civil penalty under this section;

(2)  seek injunctive relief against further violation of this chapter; and

(3)  recover attorney’s fees and reasonable court costs or other investigative expenses.

(c)  There is a rebuttable presumption that a person used reasonable care as required under this chapter.

(d)  A defendant in an action under this section may seek an expedited hearing or other process, including a request for declaratory judgment, if the person believes in good faith that the person has not violated this chapter.

(e)  A defendant in an action under this section may not be found liable if:

(1)  another person uses the artificial intelligence system affiliated with the defendant in a manner prohibited by this chapter; or

(2)  the defendant discovers a violation of this chapter through:

(A)  feedback from a developer, deployer, or other person who believes a violation has occurred;

(B)  testing, including adversarial testing or red-team testing;

(C)  following guidelines set by applicable state agencies; or

(D)  if the defendant substantially complies with the most recent version of the “Artificial Intelligence Risk Management Framework: Generative Artificial Intelligence Profile” published by the National Institute of Standards and Technology or another nationally or internationally recognized risk management framework for artificial intelligence systems, an internal review process.

(f)  The attorney general may not bring an action to collect a civil penalty under this section against a person for an artificial intelligence system that has not been deployed.

Sec. 552.106.  ENFORCEMENT ACTIONS BY STATE AGENCIES.  (a)  A state agency may impose sanctions against a person licensed, registered, or certified by that agency for a violation of Subchapter B if:

(1)  the person has been found in violation of this chapter under Section 552.105; and

(2)  the attorney general has recommended additional enforcement by the applicable agency.

(b)  Sanctions under this section may include:

(1)  suspension, probation, or revocation of a license, registration, certificate, or other authorization to engage in an activity; and

(2)  a monetary penalty not to exceed $100,000.

CHAPTER 553.  ARTIFICIAL INTELLIGENCE REGULATORY SANDBOX PROGRAM

SUBCHAPTER A.  GENERAL PROVISIONS

Sec. 553.001.  DEFINITIONS.  In this chapter:

(1)  “Applicable agency” means a department of this state established by law to regulate certain types of business activity in this state and the people engaging in that business, including the issuance of licenses and registrations, that the department determines would regulate a program participant if the person were not operating under this chapter.

(2)  “Department” means the Texas Department of Information Resources.

(3)  “Program” means the regulatory sandbox program established under this chapter that allows a person, without being licensed or registered under the laws of this state, to test an artificial intelligence system for a limited time and on a limited basis.

(4)  “Program participant” means a person whose application to participate in the program is approved and who may test an artificial intelligence system under this chapter.

SUBCHAPTER B.  SANDBOX PROGRAM FRAMEWORK

Sec. 553.051.  ESTABLISHMENT OF SANDBOX PROGRAM.  (a)  The department, in consultation with the council, shall create a regulatory sandbox program that enables a person to obtain legal protection and limited access to the market in this state to test innovative artificial intelligence systems without obtaining a license, registration, or other regulatory authorization.

(b)  The program is designed to:

(1)  promote the safe and innovative use of artificial intelligence systems across various sectors, including healthcare, finance, education, and public services;

(2)  encourage responsible deployment of artificial intelligence systems while balancing the need for consumer protection, privacy, and public safety;

(3)  provide clear guidelines for a person who develops an artificial intelligence system to test systems while certain laws and regulations related to the testing are waived or suspended; and

(4)  allow a person to engage in research, training, testing, or other pre-deployment activities to develop an artificial intelligence system.

(c)  The attorney general may not file or pursue charges against a program participant for violation of a law or regulation waived under this chapter that occurs during the testing period.

(d)  A state agency may not file or pursue punitive action against a program participant, including the imposition of a fine or the suspension or revocation of a license, registration, or other authorization, for violation of a law or regulation waived under this chapter that occurs during the testing period.

(e)  Notwithstanding Subsections (c) and (d), the requirements of Subchapter B, Chapter 552, may not be waived, and the attorney general or a state agency may file or pursue charges or action against a program participant who violates that subchapter.

Sec. 553.052.  APPLICATION FOR PROGRAM PARTICIPATION.  (a)  A person must obtain approval from the department and any applicable agency before testing an artificial intelligence system under the program.

(b)  The department by rule shall prescribe the application form.  The form must require the applicant to:

(1)  provide a detailed description of the artificial intelligence system the applicant desires to test in the program, and its intended use;

(2)  include a benefit assessment that addresses potential impacts on consumers, privacy, and public safety;

(3)  describe the applicant’s plan for mitigating any adverse consequences that may occur during the test; and

(4)  provide proof of compliance with any applicable federal artificial intelligence laws and regulations.

Sec. 553.053.  DURATION AND SCOPE OF PARTICIPATION.  (a)  A program participant approved by the department and each applicable agency may test and deploy an artificial intelligence system under the program for a period of not more than 36 months.

(b)  The department may extend a test under this chapter if the department finds good cause for the test to continue.

Sec. 553.054.  EFFICIENT USE OF RESOURCES.  The department shall coordinate the activities under this subchapter and any other law relating to artificial intelligence systems to ensure efficient system implementation and to streamline the use of department resources, including information sharing and personnel.

SUBCHAPTER C.  OVERSIGHT AND COMPLIANCE

Sec. 553.101.  COORDINATION WITH APPLICABLE AGENCY.  (a)  The department shall coordinate with all applicable agencies to oversee the operation of a program participant.

(b)  The council or an applicable agency may recommend to the department that a program participant be removed from the program if the council or applicable agency finds that the program participant’s artificial intelligence system:

(1)  poses an undue risk to public safety or welfare;

(2)  violates any federal law or regulation; or

(3)  violates any state law or regulation not waived under the program.

Sec. 553.102.  PERIODIC REPORT BY PROGRAM PARTICIPANT.  (a)  A program participant shall provide a quarterly report to the department.

(b)  The report shall include:

(1)  metrics for the artificial intelligence system’s performance;

(2)  updates on how the artificial intelligence system mitigates any risks associated with its operation; and

(3)  feedback from consumers and affected stakeholders who are using an artificial intelligence system tested under this chapter.

(c)  The department shall maintain confidentiality regarding the intellectual property, trade secrets, and other sensitive information it obtains through the program.

Sec. 553.103.  ANNUAL REPORT BY DEPARTMENT.  (a)  The department shall submit an annual report to the legislature.

(b)  The report shall include:

(1)  the number of program participants testing an artificial intelligence system in the program;

(2)  the overall performance and impact of artificial intelligence systems tested in the program; and

(3)  recommendations on changes to laws or regulations for future legislative consideration.

CHAPTER 554.  TEXAS ARTIFICIAL INTELLIGENCE COUNCIL

SUBCHAPTER A.  CREATION AND ORGANIZATION OF COUNCIL

Sec. 554.001.  CREATION OF COUNCIL.  (a)  The Texas Artificial Intelligence Council is created to:

(1)  ensure artificial intelligence systems in this state are ethical and developed in the public’s best interest;

(2)  ensure artificial intelligence systems in this state do not harm public safety or undermine individual freedoms by finding issues and making recommendations to the legislature regarding the Penal Code and Chapter 82, Civil Practice and Remedies Code;

(3)  identify existing laws and regulations that impede innovation in the development of artificial intelligence systems and recommend appropriate reforms;

(4)  analyze opportunities to improve the efficiency and effectiveness of state government operations through the use of artificial intelligence systems;

(5)  make recommendations to applicable state agencies regarding the use of artificial intelligence systems to improve the agencies’ efficiency and effectiveness;

(6)  evaluate potential instances of regulatory capture, including undue influence by technology companies or disproportionate burdens on smaller innovators caused by the use of artificial intelligence systems;

(7)  evaluate the influence of technology companies on other companies and determine the existence or use of tools or processes designed to censor competitors or users through the use of artificial intelligence systems;

(8)  offer guidance and recommendations to the legislature on the ethical and legal use of artificial intelligence systems;

(9)  conduct and publish the results of a study on the current regulatory environment for artificial intelligence systems;

(10)  receive reports from the Department of Information Resources regarding the regulatory sandbox program under Chapter 553; and

(11)  make recommendations for improvements to the regulatory sandbox program under Chapter 553.

(b)  The council is administratively attached to the Department of Information Resources, and the department shall provide administrative support to the council as provided by this section.

(c)  The Department of Information Resources and the council shall enter into a memorandum of understanding detailing:

(1)  the administrative support the council requires from the department to fulfill the council’s purposes;

(2)  the reimbursement of administrative expenses to the department; and

(3)  any other provisions necessary to ensure the efficient operation of the council.

Sec. 554.002.  COUNCIL MEMBERSHIP.  (a)  The council is composed of seven members as follows:

(1)  three members of the public appointed by the governor;

(2)  two members of the public appointed by the lieutenant governor; and

(3)  two members of the public appointed by the speaker of the house of representatives.

(b)  Members of the council serve staggered four-year terms, with the terms of three or four members expiring every two years.

(c)  The governor shall appoint a chair from among the members, and the council shall elect a vice chair from its membership.

(d)  The council may establish an advisory board composed of individuals from the public who possess expertise directly related to the council’s functions, including technical, ethical, regulatory, and other relevant areas.

Sec. 554.003.  QUALIFICATIONS.  Members of the council must be Texas residents and have knowledge or expertise in one or more of the following areas:

(1)  artificial intelligence systems;

(2)  data privacy and security;

(3)  ethics in technology or law;

(4)  public policy and regulation;

(5)  risk management related to artificial intelligence systems;

(6)  improving the efficiency and effectiveness of governmental operations; or

(7)  anticompetitive practices and market fairness.

Sec. 554.004.  STAFF AND ADMINISTRATION.  The council may hire an executive director and other personnel as necessary to perform its duties.

SUBCHAPTER B.  POWERS AND DUTIES OF COUNCIL

Sec. 554.101.  ISSUANCE OF REPORTS.  (a)  The council may issue reports to the legislature regarding the use of artificial intelligence systems in this state.

(b)  The council may issue reports on:

(1)  the compliance of artificial intelligence systems in this state with the laws of this state;

(2)  the ethical implications of deploying artificial intelligence systems in this state;

(3)  data privacy and security concerns related to artificial intelligence systems in this state; or

(4)  potential liability or legal risks associated with the use of artificial intelligence systems in this state.

Sec. 554.102.  TRAINING AND EDUCATIONAL OUTREACH.  The council shall conduct training programs for state agencies and local governments on the use of artificial intelligence systems.

Sec. 554.103.  LIMITATION OF AUTHORITY.  The council may not:

(1)  adopt rules or promulgate guidance that is binding for any entity;

(2)  interfere with or override the operation of a state agency; or

(3)  perform a duty or exercise a power not granted by this chapter.

SECTION 5.  Section 325.011, Government Code, is amended to read as follows:

Sec. 325.011.  CRITERIA FOR REVIEW.  The commission and its staff shall consider the following criteria in determining whether a public need exists for the continuation of a state agency or its advisory committees or for the performance of the functions of the agency or its advisory committees:

(1)  the efficiency and effectiveness with which the agency or the advisory committee operates;

(2)(A)  an identification of the mission, goals, and objectives intended for the agency or advisory committee and of the problem or need that the agency or advisory committee was intended to address; and

(B)  the extent to which the mission, goals, and objectives have been achieved and the problem or need has been addressed;

(3)(A)  an identification of any activities of the agency in addition to those granted by statute and of the authority for those activities; and

(B)  the extent to which those activities are needed;

(4)  an assessment of authority of the agency relating to fees, inspections, enforcement, and penalties;

(5)  whether less restrictive or alternative methods of performing any function that the agency performs could adequately protect or provide service to the public;

(6)  the extent to which the jurisdiction of the agency and the programs administered by the agency overlap or duplicate those of other agencies, the extent to which the agency coordinates with those agencies, and the extent to which the programs administered by the agency can be consolidated with the programs of other state agencies;

(7)  the promptness and effectiveness with which the agency addresses complaints concerning entities or other persons affected by the agency, including an assessment of the agency’s administrative hearings process;

(8)  an assessment of the agency’s rulemaking process and the extent to which the agency has encouraged participation by the public in making its rules and decisions and the extent to which the public participation has resulted in rules that benefit the public;

(9)  the extent to which the agency has complied with:

(A)  federal and state laws and applicable rules regarding equality of employment opportunity and the rights and privacy of individuals; and

(B)  state law and applicable rules of any state agency regarding purchasing guidelines and programs for historically underutilized businesses;

(10)  the extent to which the agency issues and enforces rules relating to potential conflicts of interest of its employees;

(11)  the extent to which the agency complies with Chapters 551 and 552 and follows records management practices that enable the agency to respond efficiently to requests for public information;

(12)  the effect of federal intervention or loss of federal funds if the agency is abolished;

(13)  the extent to which the purpose and effectiveness of reporting requirements imposed on the agency justifies the continuation of the requirement; [and]

(14)  an assessment of the agency’s cybersecurity practices using confidential information available from the Department of Information Resources or any other appropriate state agency; and

(15)  an assessment of the agency’s use of artificial intelligence systems, as that term is defined by Section 551.001, Business & Commerce Code, in its operations and its oversight of the use of artificial intelligence systems by persons under the agency’s jurisdiction, and any related impact on the agency’s ability to achieve its mission, goals, and objectives, made using information available from the Department of Information Resources, the attorney general, or any other appropriate state agency.

SECTION 6.  Section 2054.068(b), Government Code, is amended to read as follows:

(b)  The department shall collect from each state agency information on the status and condition of the agency’s information technology infrastructure, including information regarding:

(1)  the agency’s information security program;

(2)  an inventory of the agency’s servers, mainframes, cloud services, and other information technology equipment;

(3)  identification of vendors that operate and manage the agency’s information technology infrastructure; [and]

(4)  any additional related information requested by the department; and

(5)  an evaluation of the use or considered use of artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, by each state agency.

SECTION 7.  Section 2054.0965(b), Government Code, is amended to read as follows:

(b)  Except as otherwise modified by rules adopted by the department, the review must include:

(1)  an inventory of the agency’s major information systems, as defined by Section 2054.008, and other operational or logistical components related to deployment of information resources as prescribed by the department;

(2)  an inventory of the agency’s major databases, artificial intelligence systems, as defined by Section 551.001, Business & Commerce Code, and applications;

(3)  a description of the agency’s existing and planned telecommunications network configuration;

(4)  an analysis of how information systems, components, databases, applications, and other information resources have been deployed by the agency in support of:

(A)  applicable achievement goals established under Section 2056.006 and the state strategic plan adopted under Section 2056.009;

(B)  the state strategic plan for information resources; and

(C)  the agency’s business objectives, mission, and goals;

(5)  agency information necessary to support the state goals for interoperability and reuse; and

(6)  confirmation by the agency of compliance with state statutes, rules, and standards relating to information resources.

SECTION 8.  Not later than September 1, 2026, the attorney general shall post on the attorney general’s Internet website the information and online mechanism required by Section 552.102, Business & Commerce Code, as added by this Act.

SECTION 9.  (a)  Notwithstanding any other section of this Act, in a state fiscal year, a state agency to which this Act applies is not required to implement a provision found in another section of this Act that is drafted as a mandatory provision imposing a duty on the agency to take an action unless money is specifically appropriated to the agency for that fiscal year to carry out that duty.  The agency may implement the provision in that fiscal year to the extent other funding is available to the agency to do so.

(b)  If, as authorized by Subsection (a) of this section, the state agency does not implement the mandatory provision in a state fiscal year, the state agency, in its legislative budget request for the next state fiscal biennium, shall certify that fact to the Legislative Budget Board and include a written estimate of the costs of implementing the provision in each year of that next state fiscal biennium.

SECTION 10.  This Act takes effect January 1, 2026.

    President of the Senate           Speaker of the House      

I certify that H.B. No. 149 was passed by the House on April 23, 2025, by the following vote:  Yeas 146, Nays 3, 1 present, not voting; and that the House concurred in Senate amendments to H.B. No. 149 on May 30, 2025, by the following vote:  Yeas 121, Nays 17, 2 present, not voting.

______________________________

Chief Clerk of the House   

I certify that H.B. No. 149 was passed by the Senate, with amendments, on May 23, 2025, by the following vote:  Yeas 31, Nays 0.

______________________________

Secretary of the Senate   

APPROVED: __________________

                 Date       

          __________________

               Governor       


Appendix 2 — Model Ordinance: Responsible Use of Artificial Intelligence in City Operations

ORDINANCE NO. ______

AN ORDINANCE

relating to the responsible use of artificial intelligence systems by the City; establishing transparency, accountability, and oversight requirements; and providing for implementation and administration.

WHEREAS, the City recognizes that artificial intelligence (“AI”) systems are increasingly used to improve operational efficiency, service delivery, data analysis, and internal workflows; and

WHEREAS, the City further recognizes that certain uses of AI may influence decisions affecting residents, employees, vendors, or regulated parties and therefore require appropriate oversight; and

WHEREAS, the City seeks to encourage responsible innovation while preserving public trust, transparency, and accountability; and

WHEREAS, the Texas Legislature has enacted the Texas Responsible Artificial Intelligence Governance Act, effective January 1, 2026, establishing statewide standards for AI use by government entities; and

WHEREAS, the City recognizes that the adoption of artificial intelligence tools may, over time, change how work is performed and how staffing needs are structured, and that any such impacts are expected to occur gradually through attrition, reassignment, or role redesign rather than immediate workforce reductions;

NOW, THEREFORE, BE IT ORDAINED BY THE CITY COUNCIL OF THE CITY OF __________, TEXAS:

Section 1. Definitions

For purposes of this Ordinance:

  1. “Artificial Intelligence System” means a computational system that uses machine learning, statistical modeling, or related techniques to perform tasks normally associated with human intelligence, including analysis, prediction, classification, content generation, or prioritization.
  2. “Decision-Adjacent AI” means an AI system that materially influences, prioritizes, or recommends outcomes related to enforcement, eligibility, allocation of resources, personnel actions, procurement decisions, or public services, even if final decisions are made by a human.
  3. “High-Risk AI Use” means deployment of an AI system that directly or indirectly affects individual rights, access to services, enforcement actions, or legally protected interests.
  4. “Department” means any City department, office, division, or agency.

Section 2. Permitted Use of Artificial Intelligence

(a) Internal Productivity Uses. Departments may deploy AI systems for internal productivity and analytical purposes, including but not limited to:

  • Drafting and summarization of documents
  • Data analysis and forecasting
  • Workflow automation
  • Research and internal reporting
  • Customer-service chat tools providing general information (with disclaimers as appropriate)

Such uses shall not require prior Council approval but shall be subject to internal documentation requirements.

(b) Decision-Adjacent Uses. AI systems that influence or support decisions affecting residents, employees, vendors, or regulated entities may be deployed only in accordance with Sections 3 and 4 of this Ordinance.

Section 3. Prohibited Uses

No Department shall deploy or use an AI system that:

  1. Performs social scoring of individuals or groups based on behavior, personal traits, or reputation for the purpose of denying services, benefits, or rights;
  2. Intentionally discriminates against a protected class in violation of state or federal law;
  3. Generates or deploys biometric identification or surveillance in violation of constitutional protections;
  4. Produces or facilitates unlawful deep-fake or deceptive content;
  5. Operates as a fully automated decision-making system without meaningful human review in matters affecting legal rights or obligations.

Section 4. Oversight and Approval for High-Risk AI Uses

(a) Inventory Requirement. The City Manager shall maintain a centralized AI Systems Inventory identifying:

  • Each AI system in use
  • The Department deploying the system
  • The system’s purpose
  • Whether the use is classified as high-risk

(b) Approval Process. Prior to deployment of any High-Risk AI Use, the Department must:

  1. Submit a written justification describing the system’s purpose and scope;
  2. Identify the data sources used by the system;
  3. Describe human oversight mechanisms;
  4. Obtain approval from:
    • The City Manager (or designee), and
    • The City Attorney for legal compliance review.

(c) Human Accountability. Each AI system shall have a designated human owner responsible for:

  • Monitoring performance
  • Responding to errors or complaints
  • Suspending use if risks are identified

Section 5. Transparency and Public Disclosure

(a) Disclosure to the Public. When a City AI system interacts directly with residents, the City shall provide clear notice that the interaction involves AI.

(b) Public Reporting. The City shall publish annually:

  • A summary of AI systems in use
  • The general purposes of high-risk AI systems
  • Contact information for public inquiries

No proprietary or security-sensitive information shall be disclosed.

Section 6. Procurement and Vendor Requirements

All City contracts involving AI systems shall, where applicable:

  1. Require disclosure of AI functions;
  2. Prohibit undisclosed algorithmic decision-making;
  3. Allow the City to audit or review AI system outputs relevant to City operations;
  4. Require vendors to notify the City of material changes to AI functionality.

Section 7. Review and Sunset

(a) Periodic Review. High-risk AI systems shall be reviewed at least annually to assess:

  • Accuracy
  • Bias
  • Continued necessity
  • Compliance with this Ordinance

(b) Sunset Authority. The City Manager may suspend or terminate use of any AI system that poses unacceptable risk or fails compliance review.

Section 8. Training

The City shall provide appropriate training to employees involved in:

  • Deploying AI systems
  • Supervising AI-assisted workflows
  • Interpreting AI-generated outputs

Section 9. Severability

If any provision of this Ordinance is held invalid, such invalidity shall not affect the remaining provisions.

Section 10. Effective Date

This Ordinance shall take effect immediately upon adoption.


Appendix 3 — City Manager Administrative Regulation: Responsible Use of Artificial Intelligence

ADMINISTRATIVE REGULATION NO. ___

Subject: Responsible Use of Artificial Intelligence (AI) in City Operations
Authority: Ordinance No. ___ (Responsible Use of Artificial Intelligence)
Issued by: City Manager
Effective Date: __________

1. Purpose

This Administrative Regulation establishes operational procedures for the responsible deployment, oversight, and monitoring of artificial intelligence (AI) systems used by the City, consistent with adopted Council policy and applicable state law.

The intent is to:

  • Enable rapid adoption of AI for productivity and service delivery;
  • Ensure transparency and accountability for higher-risk uses; and
  • Protect the City, employees, and residents from unintended consequences.

2. Scope

This regulation applies to all City departments, offices, and divisions that:

  • Develop, procure, deploy, or use AI systems; or
  • Rely on vendor-provided software that includes AI functionality.

3. AI System Classification

Departments shall classify AI systems into one of the following categories:

A. Tier 1 — Internal Productivity AI

Examples:

  • Document drafting and summarization
  • Data analysis and forecasting
  • Internal research and reporting
  • Workflow automation

Oversight Level:

  • Department-level approval
  • Registration in AI Inventory

B. Tier 2 — Decision-Adjacent AI

Examples:

  • Permit or inspection prioritization
  • Vendor or application risk scoring
  • Resource allocation recommendations
  • Enforcement or compliance triage

Oversight Level:

  • City Manager approval
  • Legal review
  • Annual performance review

C. Tier 3 — High-Risk AI

Examples:

  • AI influencing enforcement actions
  • Eligibility determinations
  • Public safety analytics
  • Biometric or surveillance tools

Oversight Level:

  • City Manager approval
  • City Attorney review
  • Documented human-in-the-loop controls
  • Annual audit and Council notification

4. AI Systems Inventory

The City Manager’s Office shall maintain a centralized AI Systems Inventory, which includes:

  • System name and vendor
  • Department owner
  • Purpose and classification tier
  • Date of deployment
  • Oversight requirements

Departments shall update the inventory prior to deploying any new AI system.
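As a purely illustrative sketch (not part of the regulation itself), the inventory fields listed in Section 4 and the tier-based oversight requirements from Section 3 could be captured in a simple structured record. The field names, tier labels, and the example entry below are assumptions for illustration, not a prescribed format:

```python
from dataclasses import dataclass
from datetime import date
from enum import Enum

class Tier(Enum):
    # Classification tiers from Section 3 of the regulation
    INTERNAL_PRODUCTIVITY = 1   # Tier 1
    DECISION_ADJACENT = 2       # Tier 2
    HIGH_RISK = 3               # Tier 3

# Oversight requirements by tier, summarized from Section 3 (labels are illustrative)
OVERSIGHT = {
    Tier.INTERNAL_PRODUCTIVITY: [
        "Department-level approval", "Registration in AI Inventory"],
    Tier.DECISION_ADJACENT: [
        "City Manager approval", "Legal review", "Annual performance review"],
    Tier.HIGH_RISK: [
        "City Manager approval", "City Attorney review",
        "Documented human-in-the-loop controls",
        "Annual audit and Council notification"],
}

@dataclass
class AISystemRecord:
    """One entry in the AI Systems Inventory (fields from Section 4)."""
    system_name: str
    vendor: str
    department_owner: str
    purpose: str
    tier: Tier
    deployed: date
    system_owner: str  # designated human owner (Section 6)

    def oversight_requirements(self) -> list[str]:
        return OVERSIGHT[self.tier]

# Hypothetical example entry
record = AISystemRecord(
    system_name="Permit Triage Assistant",
    vendor="ExampleVendor",
    department_owner="Development Services",
    purpose="Prioritize the permit review queue",
    tier=Tier.DECISION_ADJACENT,
    deployed=date(2026, 3, 1),
    system_owner="J. Smith",
)
print(record.oversight_requirements())
```

Registering each system as a structured record like this, rather than in free-form memos, is one way a City Manager’s Office could keep the inventory auditable and make the required oversight steps mechanical to look up.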

5. Approval Process

A. Tier 1 Systems

  • Approved by Department Director
  • Registered in inventory

B. Tier 2 and Tier 3 Systems

Departments must submit:

  1. A description of the system and intended use
  2. Data sources and inputs
  3. Description of human oversight
  4. Risk mitigation measures

Approval required from:

  • City Manager (or designee)
  • City Attorney (for legal compliance)

6. Human Oversight & Accountability

Each AI system shall have a designated System Owner responsible for:

  • Monitoring system outputs
  • Responding to errors or complaints
  • Suspending use if risks emerge
  • Coordinating audits or reviews

No AI system may operate as a fully autonomous decision-maker for actions affecting legal rights or obligations.

7. Vendor & Procurement Controls

Procurement involving AI systems shall:

  • Identify AI functionality explicitly in solicitations
  • Require vendors to disclose material AI updates
  • Prohibit undisclosed algorithmic decision-making
  • Preserve City audit and review rights

8. Monitoring, Review & Sunset

  • Tier 2 and Tier 3 systems shall undergo annual review.
  • Systems may be suspended or sunset if:
    • Accuracy degrades
    • Bias is identified
    • Legal risk increases
    • The system no longer serves a defined purpose

9. Training

Departments deploying AI shall ensure appropriate staff training covering:

  • Proper interpretation of AI outputs
  • Limitations of AI systems
  • Escalation and error-handling procedures

10. Reporting to Council

The City Manager shall provide Council with:

  • An annual summary of AI systems in use
  • Identification of Tier 3 (High-Risk) systems
  • Any material incidents or corrective actions

11. Effective Date

This Administrative Regulation is effective immediately upon issuance.

12. Workforce Considerations

The use of artificial intelligence systems may change job functions and workflows over time. Departments shall:

  • Use AI to augment employee capabilities wherever possible;
  • Prioritize retraining, reassignment, and natural attrition when workflows change;
  • Coordinate with Human Resources before deploying AI systems that materially alter job duties; and
  • Recognize that long-term staffing impacts, if any, remain subject to City Manager and City Council authority.

Appendix 4 — Public-Facing FAQ: Responsible Use of Artificial Intelligence in City Operations

What is this ordinance about?

This ordinance establishes clear rules for how the City may use artificial intelligence (AI) tools. It allows the City to use modern technology to improve efficiency and service delivery while ensuring that higher-risk uses are transparent, accountable, and overseen by people.

Is the City already using artificial intelligence?

Yes. Like most modern organizations, the City already uses limited AI-enabled tools for tasks such as document drafting, data analysis, and customer-service support, as well as within vendor-provided software systems.

This ordinance ensures those tools are used consistently and responsibly.

Is this ordinance banning artificial intelligence?

No.
The ordinance does not ban AI. It encourages responsible adoption of AI for productivity and internal efficiency while placing guardrails on uses that could affect people’s rights or access to services.

Why is the City adopting rules now?

AI tools are becoming more common and more capable. Clear rules help ensure:

  • Transparency in how AI is used
  • Accountability for outcomes
  • Compliance with new Texas law
  • Public trust in City operations

The Texas Legislature recently enacted statewide standards for AI use by government entities, and this ordinance aligns the City with those expectations.

Will artificial intelligence affect City jobs?

AI may change how work is done over time, just as previous technologies have.

This ordinance does not authorize immediate workforce reductions. Any long-term impacts are expected to occur gradually and, where possible, through:

  • Natural attrition
  • Reassignment
  • Retraining
  • Changes in job duties

Final staffing decisions remain with City leadership and City Council.

Will AI replace City employees?

AI tools are intended to assist employees, not replace human judgment. For higher-risk uses, the ordinance requires meaningful human oversight and accountability.

Can AI make decisions about me automatically?

No.
The ordinance prohibits fully automated decision-making that affects legal rights, enforcement actions, or access to services without human review.

AI may provide information or recommendations, but people remain responsible for decisions.

Will the City use AI for surveillance or facial recognition?

The ordinance prohibits AI uses that violate constitutional protections, including improper biometric surveillance.

Any use of biometric or surveillance-related AI would require strict legal review and compliance with state and federal law.

How will I know if I’m interacting with AI?

If the City uses AI systems that interact directly with residents, the City must clearly disclose that you are interacting with an AI system.

Does this apply to police or public safety?

Yes.
AI tools used in public safety contexts are considered higher-risk and require additional review, approval, and oversight. AI systems may not independently make enforcement decisions.

Who is responsible if an AI system makes a mistake?

Each AI system has a designated City employee responsible for monitoring its use, addressing errors, and suspending the system if necessary.

Responsibility remains with the City—not the software.

Will the public be able to see how AI is used?

Yes.
The City will publish an annual summary describing:

  • The types of AI systems in use
  • Their general purpose
  • How residents can ask questions or raise concerns

Sensitive or proprietary information will not be disclosed.

Does this create a new board or bureaucracy?

No.
Oversight is handled through existing City leadership and administrative structures.

Is there a cost to adopting this ordinance?

There is no direct cost associated with adoption. Over time, responsible AI use may help control costs by improving productivity and efficiency.

How often will this policy be reviewed?

Higher-risk AI systems are reviewed annually. The ordinance itself may be updated as technology and law evolve.

Who can I contact with questions or concerns?

Residents may contact the City Manager’s Office or submit inquiries through the City’s website. Information on AI use and reporting channels will be publicly available.

Bottom Line

This ordinance ensures the City:

  • Uses modern tools responsibly
  • Maintains human accountability
  • Protects public trust
  • Aligns with Texas law
  • Adapts thoughtfully to technological change

The Municipal & Business Workquake of 2026: Why Cities Must Redesign Roles Now—Before Attrition Does It for Them

A collaboration between Lewis McLain & AI

Cities are about to experience an administrative shift that will look nothing like a “tech revolution” and nothing like a classic workforce reduction. It will arrive as a workquake: a sudden drop in the labor required to complete routine tasks across multiple departments, driven by AI systems that can ingest documents, apply rules, assemble outputs, and draft narratives at scale.

The danger is not that cities will replace everyone with software. The danger is more subtle and far more likely: cities will allow AI to hollow out core functions unintentionally, through non-replacement hiring, scattered tool adoption, and informal workflow shortcuts—until the organization’s accountability structure no longer matches the work being done.

In 2026, the right posture is not fascination or fear. It is proactive redesign.


I. The Real Change: Task Takeover, Not Job Replacement

Municipal roles often look “human” because they involve public trust, compliance, and service. But much of the day-to-day work inside those roles is structured:

  • collecting inputs
  • applying policy checklists
  • preparing standardized packets
  • producing routine reports
  • tracking deadlines
  • drafting summaries
  • reconciling variances
  • adding narrative to numbers

Those tasks are precisely what modern AI systems now handle with speed and consistency. What remains human is still vital—but it is narrower: judgment, discretion, ethics, and accountability.

That creates the same pattern across departments:

  • the production layer shrinks rapidly
  • the review and exception layer becomes the job

Cities that don’t define this shift early will experience it late—as a staffing and governance crisis.


II. Example – City Secretary: Where Governance Work Becomes Automated

The city secretary function sits at the center of formal governance: agendas, minutes, public notices, records, ordinances, and elections. Much of the labor in this area is procedural and document-heavy.

Tasks likely to be absorbed quickly

  • Agenda assembly from departmental submissions
  • Packet compilation and formatting
  • Deadline tracking for posting and notices
  • Records indexing and retrieval
  • Draft minutes from audio/video with time stamps
  • Ordinance/resolution histories and cross-references

What shrinks

  • clerical assembly roles
  • manual transcription
  • routine records handling

What becomes more important

  • legal compliance judgment (Open Meetings, Public Information)
  • defensibility of the record
  • election integrity protocols
  • final human review of public-facing outputs

In other words: the city secretary role does not disappear. It becomes governance QA—with higher stakes and fewer support layers.


III. Example – Purchasing & Procurement: Where Process Becomes Automated Screening

Purchasing has always been a mix of routine compliance and high-risk discretion. AI hits the routine side first, fast.

Tasks likely to be absorbed quickly

  • quote comparisons and bid tabulations
  • price benchmarking against history and peers
  • contract template population
  • insurance/required-doc compliance checks
  • renewal tracking and vendor performance summaries
  • anomaly detection (odd pricing, split purchases, policy exceptions)

What shrinks

  • bid tabulators
  • quote chasers
  • contract formatting staff
  • clerical procurement roles

What becomes more important

  • vendor disputes and negotiations
  • integrity controls (conflicts, favoritism risk)
  • exception approvals with documented reasoning
  • strategic sourcing decisions

Procurement shifts from “processing” to risk-managed decisioning.


IV. Example – Budget Analysts: Where “Analysis” Separates from “Assembly”

Budget offices are often mistaken for purely analytical shops. In reality, a large share of the work is assembly: gathering departmental submissions, normalizing formats, building tables, writing routine narratives, and explaining variances.

Tasks likely to be absorbed quickly

  • ingestion and normalization of department requests
  • enforcement of submission rules and formatting
  • auto-generated variance explanations
  • draft budget narratives (department summaries, highlights)
  • scenario tables (base, constrained, growth cases)
  • continuous budget-to-actual reconciliation

What shrinks

  • entry-level budget analysts
  • table builders and narrative drafters
  • budget book production labor

What becomes more important

  • setting assumptions and policy levers
  • framing tradeoffs for leadership and council
  • long-range fiscal forecasting judgment
  • telling the truth clearly under political pressure

Budget staff shift from spreadsheet production to decision support and persuasion with integrity.


V. Example – Police & Fire Data Analysts: Where Reporting Becomes Real-Time Patterning

Public safety analytics is one of the most automatable municipal domains because it is data-rich, structured, and continuous. The “report builder” role is especially vulnerable.

Tasks likely to be absorbed quickly

  • automated monthly/quarterly performance reporting
  • response-time distribution analysis
  • hotspot mapping and geospatial summaries
  • staffing demand pattern detection
  • anomaly flagging (unusual patterns in calls, activity, response)
  • draft CompStat-style narratives and slide-ready briefings

What shrinks

  • manual report builders
  • map producers
  • dashboard-only roles
  • grant-report drafters relying on routine metrics

What becomes more important

  • human interpretation (what the pattern means operationally)
  • explaining limitations and avoiding false certainty
  • bias and fairness oversight
  • defensible analytics for court, public inquiry, or media scrutiny

Public safety analytics becomes less about producing charts and more about protecting truth and trust.


VI. Example – More Roles Next in Line

Permitting & Development Review

AI can quickly absorb:

  • completeness checks
  • code cross-referencing
  • workflow routing and status updates
  • templated staff reports

Humans remain essential for:

  • discretionary judgments
  • negotiation with applicants
  • interpreting ambiguous code situations
  • public-facing case management

HR Analysts

AI absorbs:

  • classification comparisons
  • market surveys and comp modeling
  • policy drafting and FAQ support

Humans remain for:

  • discipline, negotiations, sensitive cases
  • equity judgments and culture
  • leadership counsel and conflict resolution

Grants Management

AI absorbs:

  • opportunity scanning and matching
  • compliance calendars
  • draft narrative sections and attachments lists

Humans remain for:

  • strategy (which grants matter)
  • partnerships and commitments
  • risk management and audit defense

VII. The Practical Reality in Cities: Attrition Is the Mechanism

This won’t arrive as dramatic layoffs. It will arrive as:

  • hiring freezes
  • “we won’t backfill that position”
  • consolidation of roles
  • sudden expectations that one person can do what three used to do

If cities do nothing, AI will still be adopted—piecemeal, unevenly, and without governance redesign. That produces an organization with:

  • fewer people
  • unclear accountability
  • heavier compliance risk
  • fragile institutional memory

VIII. What “Proactive” Looks Like in 2026

Cities need to act immediately in four practical ways:

  1. Define what must remain human
    • elections integrity
    • public record defensibility
    • procurement exceptions and ethics
    • budget assumption-setting and council framing
    • public safety interpretation and bias oversight
  2. Separate production from review
    • let AI assemble
    • require humans to verify, approve, and own
  3. Rewrite job descriptions now
    • stop hiring for assembly work
    • hire for judgment, auditing, communication, and governance
  4. Build the governance layer
    • standards for AI outputs
    • audit trails
    • transparency policies
    • escalation rules
    • periodic review of AI-driven decisions

This is not an IT upgrade. It’s a redesign of how public authority is exercised.


Conclusion: The Choice Cities Face

Cities will adopt AI regardless—because the savings and speed will be undeniable. The only choice is whether the city adopts AI intentionally or accidentally.

If adopted intentionally, AI becomes:

  • a productivity tool
  • a compliance enhancer
  • a service accelerator

If adopted accidentally, AI becomes:

  • a quiet hollowing of institutional capacity
  • a transfer of control from policy to tool
  • and eventually a governance failure that will be blamed on people who never had the chance to redesign the system

2026 is early enough to steer the transition.
Waiting will not preserve the old model. It will only ensure the new one arrives without a plan.

End note: I usually spend a couple of days, at minimum, compiling all my bank and credit card records, assigning classifications, summarizing, and handing my CPA a complete set of documents. This year I uploaded the documents to AI, instructed it to prepare the package, and answered its list of questions on reconciliation and classification issues. Two hours later I had the full package, including comparisons to past years drawn from the returns I had also uploaded. I was 100% ready on New Year’s Eve, waiting only for the 1099s to arrive by the end of January. Meanwhile, I have had AI enhance and build out a comprehensive accounting system with polished schedules for cash flow, taxation notes, and checklists reflecting new IRS rules, plus general help: more than I was getting from my CPA. I’ll soon be able to take over the CPA duties myself. It’s just the start of what I can turn over to AI while I become the editor and reviewer instead of doing the dreaded grunt work. LFM

Cross-Strait Tensions in the Taiwan Strait


A collaboration between Lewis McLain & AI

Understanding the Headlines — and the History Behind Them

(As of December 31, 2025)

https://cdn.britannica.com/32/245432-050-C0AC0B3A/Locator-map-Taiwan-Strait.jpg
https://pbs.twimg.com/media/FZaS5UNaQAARbcl.jpg
https://ichef.bbci.co.uk/news/480/cpsprodpb/24F6/production/_132226490_china_taiwan_v05_first_island_chain_2x640-nc.png.webp

The Taiwan Strait—a body of water only about 180 kilometers wide separating Taiwan from China—has become one of the most dangerous fault lines in global politics. As of the final days of 2025, tensions are not merely elevated; they are tightly wound, compressed by military modernization, economic interdependence, and unresolved history.

For many observers, the story appears in fragments: arms packages, military drills, sharp diplomatic language. What follows connects those fragments into a coherent narrative.


Geography and the Strategic Chokepoint

Taiwan’s location places it at the center of the First Island Chain—a strategic arc stretching from northern Japan through Taiwan and into the Philippines. This chain functions as a natural barrier limiting access from the Asian mainland to the open Pacific. Control of Taiwan would fundamentally alter naval power projection in East Asia.

This geography explains why the Taiwan Strait is never “quiet.” It is narrow, crowded, and increasingly contested. The informal median line that once helped reduce risk has largely disappeared as a meaningful restraint.


The Backstory That Never Ended

https://64.media.tumblr.com/6547e6ebe056b21bed8cb13fb6c988cf/tumblr_ntbf5m34Yn1rasnq9o1_1280.jpg
https://upload.wikimedia.org/wikipedia/commons/5/59/Movement_KMTretreat.svg
https://upload.wikimedia.org/wikipedia/commons/8/85/Mao_Proclaiming_New_China.JPG

The roots of the current crisis trace back to 1949. After losing the Chinese Civil War, the Kuomintang retreated to Taiwan, while the People’s Republic of China was founded on the mainland.

From Beijing’s perspective, Taiwan is not a foreign country but the last unresolved piece of a civil war. From Taiwan’s perspective, decades of separate political development—culminating in a robust democracy—have produced a distinct identity and lived reality. These competing narratives coexist uneasily, and neither side believes time alone will resolve the dispute.


Why the United States Is Involved — Carefully

https://thenewglobalorder.com/wp-content/uploads/2021/07/https3A2F2Fs3-ap-northeast-1.amazonaws.com2Fpsh-ex-ftnikkei-3937bb42Fimages2F12F42F12F12F16741141-1-eng-GB2F20181120-Taiwan-conflict.png.jpeg
https://upload.wikimedia.org/wikipedia/commons/6/6b/Taiwan_Strait.png

U.S. involvement is governed by the Taiwan Relations Act, which commits Washington to helping Taiwan maintain the capacity to defend itself. Importantly, U.S. policy relies on strategic ambiguity—deliberately avoiding a clear promise or denial of military intervention.

This ambiguity has preserved peace for decades by discouraging both Chinese aggression and unilateral Taiwanese declarations of independence. However, ambiguity works best when power balances change slowly. That condition no longer holds.


The Trigger: U.S. Military Aid in December 2025

https://www.reuters.com/resizer/v2/2RKW6S33PROONHWXO7W7TG7EUQ.jpg?auth=a0523ed0fb5e1875fafd538866db65fb28080b6d95285c6151b5cbd85ee315fd&height=1200&quality=80&smart=true&width=1200
https://defence-blog.com/wp-content/uploads/2025/12/GzaHUC4a4AAIS49.jpg
https://upload.wikimedia.org/wikipedia/commons/5/57/Launcher_of_Chien_hsiang_loitering_munition.jpg

On December 17–18, 2025, the United States approved an approximately $11.1 billion military assistance package for Taiwan. This package was significant not only for its size, but for its strategic intent.

Rather than emphasizing prestige platforms, the aid focused on asymmetric defense systems—HIMARS, advanced artillery, loitering munitions, and anti-armor weapons. These systems are designed to complicate invasion planning, disrupt amphibious landings, and increase uncertainty for any attacker.

The signal was clear: the United States intends to strengthen deterrence by denial—making success less likely, not merely punishment more severe.


China’s Immediate Response: Overt Military Drills

https://news.cgtn.com/news/2022-08-06/Chart-of-the-Day-Why-PLA-s-six-drills-areas-around-Taiwan-matter-1chdv5ynjMc/img/21dc2e4e70794017866fe13e8cdf5e70/21dc2e4e70794017866fe13e8cdf5e70.png
https://www.visualcapitalist.com/wp-content/uploads/2025/12/China_Military_Drills_Around_Taiwan_SITE.webp
https://eng.mod.gov.cn/xb/News_213114/TopStories/_attachment/2022/08/04/4917338_1c697a55663b2455f0c40c.jpg

Within days, China responded with large-scale military exercises under Justice Mission 2025, conducted during the final two days of December. These were not symbolic maneuvers.

The drills included simulated blockades, live-fire missile activity, coordinated naval and air encirclement, and electronic and cyber operations. Crucially, they ignored the historical median line, reinforcing Beijing’s assertion that no boundary exists.

For China, the drills served three audiences at once:

  • Domestic: demonstrating resolve and control
  • Military: testing readiness under realistic conditions
  • International: signaling that U.S. aid will be met with immediate pressure

The speed of response mattered as much as the scale. Where past reactions unfolded over weeks, this response arrived in days—illustrating how compressed the strategic environment has become.


Timeline: How the Escalation Unfolded

https://chinapower.csis.org/?attachment_id=10673
https://2021-2025.state.gov/wp-content/uploads/2023/09/FMS-2023-Graphic.jpg
  • Mid-December 2025 – U.S. approves major asymmetric defense aid to Taiwan
  • Days later – China announces and executes large-scale encirclement exercises
  • End of December – Normalization of high-intensity operations around Taiwan

This is not a crisis spike followed by calm. It is a ratcheting process.


Why This Moment Is Different

Three changes distinguish today’s environment from earlier decades:

  1. Military capability gaps have narrowed — China can now execute complex, multi-domain operations
  2. Economic stakes are global — Taiwan’s semiconductor dominance affects every advanced economy
  3. Reaction time has collapsed — signaling and counter-signaling now occur almost instantly

These factors reduce the margin for error. When both the magnitude and the velocity of events rise together, the risk of miscalculation rises with them.


What Happens Next? Plausible Paths Forward (6–12 Months)

https://csis-website-prod.s3.amazonaws.com/s3fs-public/2024-06/240605_Lin_ChinaQuarantineTaiwan_Fig7_0.png?VersionId=xD7gUdmVW4zZT2yrU.fQLCYscFBHFSDm
https://www.swp-berlin.org/publications/assets/Research_Paper/2022RP11/images/2022RP11_Security_Indo-Pacific_001.png

Scenario 1: Sustained Pressure (Most Likely)
China continues frequent drills, air incursions, and maritime pressure short of war. Taiwan fortifies defenses. The U.S. deepens coordination with allies. Tension remains constant but controlled.

Scenario 2: Crisis Trigger (Moderate Risk)
An accident, miscalculation, or political shock produces a short-term crisis—missile tests, blockade rehearsal, or sharp escalation—followed by emergency diplomacy.

Scenario 3: De-Escalation Window (Least Likely)
Back-channel diplomacy temporarily reduces activity, but core disagreements remain unresolved.


Reading the News With Clearer Eyes

For those following headlines, the danger is assuming each event stands alone. In reality, the Taiwan Strait operates as a continuous feedback loop—every move invites a response, every response reshapes expectations.

The risk lies not in any single arms package or military drill, but in the cumulative narrowing of choices. When history, identity, military power, and global economics converge in a narrow stretch of water, stability depends not on goodwill—but on restraint, clarity, and time. All three are increasingly scarce.

After the Fireworks: What the First Morning of the Year Is For

A collaboration between Lewis McLain & AI

Midnight gets the attention, but morning gets the truth.

The fireworks fade quickly. The music stops. Streets empty. Festive hats are cleared away. By the time the sun rises on the first day of the year, the world has grown quiet again—almost unchanged. The calendar has turned, but the room still looks the same. The problems did not disappear overnight. Neither did the blessings.

That quiet is not a letdown. It is the point.

For thousands of years, humanity has gathered at midnight to mark the turning of time. But it has always been the morning after that determines whether anything truly changes. Midnight is ceremonial. Morning is operational.


Why Midnight Can’t Carry the Weight We Give It

We ask too much of midnight.

We expect clarity, resolve, closure, and renewal to arrive in a single moment. We compress an entire year’s worth of meaning into a countdown and a cheer. When it fails to deliver transformation, we feel either disappointed or embarrassed by our own expectations.

But midnight was never meant to do the work of renewal. It only marks the handoff.

Even in ancient cultures, the celebration was followed by days of ritual reordering—debts repaid, vows honored, fields prepared, households reset. Renewal was not instantaneous; it was deliberate.

The modern world kept the celebration and lost the follow-through.


The First Morning Is Honest in a Way Midnight Is Not

Morning has no soundtrack. No audience. No spectacle.

The first morning of the year confronts us with continuity:

  • The same body
  • The same relationships
  • The same responsibilities
  • The same unfinished work

And that is precisely why it matters.

Real change does not arrive in dramatic gestures. It arrives in quiet decisions made when no one is counting down, applauding, or watching. Morning exposes whether we were serious—or merely hopeful.


What the First Morning Asks of Us

The first morning of the year asks better questions than midnight ever could.

Not What do you promise?
But What will you tend?

Not What will you fix all at once?
But What will you stop ignoring?

Not Who do you want to become?
But Who will you show up as today?

These questions do not demand ambition. They demand honesty.


Why Small Faithfulness Outlasts Grand Resolution

Resolutions fail not because they aim too high, but because they assume momentum will carry them. Morning teaches a different lesson: momentum fades; habits remain.

Civilizations, institutions, and people rarely collapse because of one bad decision. They erode because of deferred maintenance—small things left unattended because they were inconvenient, invisible, or uncomfortable.

The same is true personally. Health declines quietly. Relationships drift slowly. Faith thins gradually. None of it announces itself with fireworks.

Morning is where maintenance happens. It is the time to restore, to recommit, to renew, and to count the blessings.


The Courage of Ordinary Beginnings

There is a particular courage in beginning again without drama.

It looks like:

  • Returning a call that should have been made months ago
  • Scheduling an appointment long avoided
  • Reopening a conversation gently rather than triumphantly
  • Continuing a responsibility without announcing it as a “new start”

This is not inspirational courage. It is durable courage.

The kind that survives February.


A Word About Gratitude

The first morning of the year is also where gratitude regains its balance.

Gratitude at midnight often feels forced—too broad, too general. Morning gratitude is specific. It notices:

  • What endured
  • What was preserved
  • What did not break, even when it could have

Gratitude without denial is one of the most stabilizing forces a person—or a society—can cultivate.


Why This Matters Beyond the Personal

What is true for individuals is true for communities.

Cities do not renew themselves at ribbon cuttings. Institutions do not regain trust through slogans. Systems do not become safer because a report was filed or a year closed.

Improvement happens in the quiet work that follows acknowledgment:

  • Maintenance after inspection
  • Correction after recognition
  • Stewardship after celebration

Morning is where accountability lives.


The Gift of the First Morning

The first morning of the year offers a gift that midnight cannot: continuity without illusion.

It does not erase the past.
It does not guarantee the future.
It simply gives us another day—and asks what we will do with it.

That is enough.


Conclusion: Why the Morning Deserves More Honor Than Midnight

We will always gather at midnight. That is human. We need ceremony. We need markers. We need shared moments.

But if we are honest, the future is shaped less by how loudly we celebrated than by how quietly we lived afterward.

The year does not change at midnight.
It changes when morning meets responsibility.

And that is where renewal—real, lasting renewal—has always begun.

Standing at Midnight: The History, Meaning, and Stories of New Year’s Eve

A collaboration between Lewis McLain & AI

Every year, at the stroke of midnight, millions of people pause—some in crowded city squares, some in living rooms, some alone. Fireworks erupt, glasses clink, and clocks roll forward. It feels celebratory, but beneath the noise lies something far older and quieter: a human instinct to stop time long enough to ask where we’ve been and whether it is safe to go on.

New Year’s Eve is not merely a party. It is one of humanity’s oldest rituals, reshaped again and again as civilizations learned to measure time, fear uncertainty, and hope for renewal.


From Chaos to Order: Why the Year Needed an Ending

The earliest New Year observances were not festive. They were protective.

Thousands of years ago, agricultural societies understood that survival depended on cycles they could not control. The Babylonians marked the new year with Akitu, a multi-day rite meant to reaffirm cosmic order, humility before the gods, and continuity of leadership. The “new year” was not a reset—it was a plea.

Ancient Rome refined this idea when Julius Caesar reformed the calendar in 46 BC. By fixing January 1 as the start of the year, Rome anchored time itself to Janus, the god who looked backward and forward at once. Romans exchanged gifts, offered sacrifices, and spoke carefully, believing the first words of the year could shape the months ahead.

From the beginning, New Year’s Eve was about thresholds—dangerous, hopeful moments when one thing ended and another had not yet begun.


Faith, Restraint, and the Moral Turn

As Christianity spread across Europe, exuberant pagan festivals fell under suspicion. The Church redirected the year’s turning toward reflection rather than revelry. For centuries, the end of the year was marked not with fireworks but with prayers, vigils, and confession.

This tradition never fully disappeared. “Watch Night” services—especially prominent in Methodist and African-American churches—framed New Year’s Eve as a sacred accounting: gratitude for survival, repentance for failures, and trust for what lay ahead.

The message was simple but demanding: celebration without reflection is shallow; reflection without hope is unbearable.


Fire, Noise, and Folk Wisdom

Outside formal religion, people preserved older instincts in folk traditions.

In Scotland’s Hogmanay, torchlight processions and fire festivals symbolized purification. In many cultures, loud noises were believed to chase away misfortune—an echo of ancient fears that the boundary between years left communities vulnerable.

What we now call “festive chaos” once served a serious purpose: protecting the future by confronting the unknown.


The Clock Takes Over: Modern New Year’s Eve Is Born

The Industrial Revolution changed everything. Once time became standardized—regulated by clocks, railways, and broadcast signals—midnight itself became the star.

In 1907, a glowing sphere descended in Times Square, creating a ritual that transformed New Year’s Eve into a shared national moment. Later, television turned it global. Fireworks over Sydney now greet the year before much of the world is awake, passing the celebration westward like a torch.

New Year’s Eve became less about survival and more about synchronization—humanity counting together.


Noteworthy Stories That Shaped the Meaning

1. Vows Older Than Resolutions

Modern New Year’s resolutions often feel flimsy, but their roots are ancient. Babylonians made promises to repay debts and return borrowed tools. Romans vowed loyalty and moral improvement. What changed is not the impulse, but our patience.

The failure of resolutions is not proof of their foolishness—it is evidence that self-examination has always been hard.


2. Midnight in Wartime

One of the most poignant New Year stories comes not from a party, but from silence.

During World War I, soldiers wrote letters describing New Year’s Eve in the trenches—cold, dark, uncertain. In some places, guns fell quiet at midnight. Men on opposite sides marked the passing year with prayers rather than gunfire, unsure if they would see another.

The calendar turned, but the war did not end. The moment mattered anyway.


3. The Baby New Year

The image of a diaper-clad infant replacing an old man with a beard emerged in 19th-century America. It is sentimental, but revealing. The symbol suggests not erasure of the past, but inheritance: the old year hands something unfinished to the new.

The baby does not judge the year that was. It simply receives it.


Why We Still Gather

Despite centuries of change, New Year’s Eve retains its core tension:

  • We celebrate because survival deserves joy.
  • We reflect because denial is dangerous.
  • We hope because despair is unsustainable.

Fireworks today are not so different from ancient fires. They declare, in light and sound, that we are still here.


The Deeper Meaning of Midnight

New Year’s Eve is not about pretending the past did not happen. It is about acknowledging that time moved forward anyway.

At midnight, we stand in a narrow space where memory and possibility overlap. We look back—not to relive—but to understand. We look forward—not to predict—but to commit.

That is why the ritual endures.


Conclusion: The Year Ends Whether We Pay Attention or Not

The calendar will turn without our consent. What remains a choice is whether we notice.

Across civilizations, faiths, wars, and technologies, New Year’s Eve has survived because it answers a human need deeper than celebration:

To pause long enough to tell the truth—then step forward anyway.

Fireworks fade. Music ends. Glasses are set down.
But the quiet question lingers into the first morning of the year:

Given what we now know, how shall we live the days we’ve been given?

That question—asked honestly—is the oldest New Year’s tradition of all.


The Handoff

Midnight is not an ending so much as a transfer.

One year does not disappear when the clock strikes twelve; it places its weight gently—but firmly—into the hands of the next. What we learned does not evaporate. What we failed to do does not reset. What endured does not need to be announced again.

New Year’s Eve marks the moment when time pauses just long enough to look both ways. But the work of living has never belonged to midnight. It belongs to the hours that follow—when the noise fades, when the lights dim, and when responsibility returns without ceremony.

The celebration marks the handoff.
The morning receives it.

And so, having stood at midnight and named what this turning means, it is right to ask what comes next—not with promises shouted into the dark, but with attention offered quietly in the light of a new day.


When the Holidays Press In: Recent Texas Tragedies and a Call to Awareness

A collaboration between Lewis McLain & AI

In the days surrounding Christmas, several Texas communities awoke to grim headlines—family-related killings that unfolded not in public places, but inside homes. These cases remain under investigation. The reasons are not yet known, and in some instances may never be fully understood. Still, the timing of these events—clustered around a season commonly associated with joy and togetherness—has prompted renewed concern about how holidays can intensify pressures already present in many lives.

What the News Reports—Briefly and Factually

In Grand Prairie, police responded late at night to a family-violence call. According to investigators, a man shot his wife inside their home and later died from a self-inflicted gunshot wound. Their adult son was injured but survived after escaping and calling 911. Officers described the scene as a domestic tragedy with no ongoing threat to the public. The investigation continues, and authorities have not released a motive.

In McKinney, officers conducting a welfare check discovered an elderly couple dead in their home, both victims of homicide. While clearing the residence, police encountered the couple’s adult son, armed with a firearm. Officers shot him after he failed to comply with commands. He survived and has been charged in connection with his parents’ deaths. Officials have emphasized that details remain under investigation and have cautioned against speculation.

Elsewhere in Texas during the holiday period, authorities have reported additional family-related killings, including cases involving intimate partners and children present in the home. In some instances, police noted prior disturbance calls; in others, no public history has been released. Across these reports, one common thread stands out: the violence occurred within close relationships, during a time of year when stress is often high and support systems can be strained.

What These Stories Illustrate—Without Explaining Them

None of these cases proves that the holidays cause violence. The news does not say that. Law enforcement has not said that. But the clustering of tragedies during this season illustrates something widely acknowledged by counselors, clergy, and first responders: holidays can amplify pressures that already exist.

The holiday season compresses time and expectations. Financial strain increases. Work and school routines shift or disappear. Families spend more time together—sometimes healing, sometimes reopening old wounds. Grief is sharper for those who have lost loved ones. Loneliness is heavier for those who feel forgotten. For people already struggling with mental illness, addiction, despair, or anger, the margin for coping can narrow quickly.

Violence rarely begins at the moment it erupts. More often, it follows a long buildup of unaddressed pain, shame, fear, or perceived failure. The holidays can act as a mirror—reflecting not only what is celebrated, but also what is missing. When expectations collide with reality, and when isolation replaces connection, the risk of harm rises.

An Urgent Caution—For Families and Communities

These recent Texas stories are not puzzles to be solved from afar. They are warnings to be heeded close to home.

They remind us to:

  • take signs of distress seriously, especially sudden withdrawal, volatility, or hopeless talk;
  • recognize that “togetherness” can be difficult or even dangerous for some families;
  • understand that asking for help is not a weakness but a necessary intervention;
  • remember that stepping away from a heated situation can be an act of love.

The most dangerous assumption during the holidays may be that everyone else is fine.

A Prayer

God of mercy and peace,

We come before You mindful of lives lost and families shattered,
especially in a season meant for light and hope.

Hold close those who grieve tonight—
those whose homes are quiet when they should be full,
and those whose hearts carry questions without answers.

For those living under heavy pressure—
weighed down by fear, anger, loneliness, illness, or despair—
grant clarity before harm, courage to ask for help,
and the presence of someone who will listen.

Give wisdom to families, neighbors, pastors, counselors, and first responders
to notice distress, to intervene with compassion,
and to act before silence turns into tragedy.

Teach us to be gentle with one another,
patient in conflict,
and quick to choose life, restraint, and love.

In this season, may Your peace enter the places
where celebration feels hardest,
and may Your light reach even the darkest rooms.

Amen.