Any serious discussion of Texas local government must begin with a foundational constitutional fact:
In the United States, there are only two levels of sovereign government: the federal government and the states.
That is the full list.
Counties, cities, school districts, special districts, authorities, councils, boards, and commissions are not sovereign. They possess no inherent authority. They exist only because a state legislature has chosen to delegate specific powers to them, and those powers may be expanded, limited, preempted, reorganized, or withdrawn entirely.
Texas local government is therefore not a story of decentralization. It is a story of delegated administration, followed—inevitably—by state-directed coordination when delegation produced excessive fragmentation.
The State of Texas as Sovereign and System Designer
The State of Texas is sovereign within its constitutional sphere. That sovereignty includes the authority to:
Create local governments
Define and limit their powers
Redraw or freeze their boundaries
Preempt their ordinances
Reorganize or abolish them
Local governments are not junior partners in sovereignty. They are instruments through which the state governs a vast and diverse territory.
From the beginning, Texas made a defining structural choice: rather than consolidate government as complexity increased, it would delegate narrowly, preserve local identity, and retain sovereignty at the state level. That choice explains the layered system that followed.
Counties: The First Subdivision of State Power
Counties were Texas’s original subdivision of state authority, adopted from Anglo-American legal traditions after independence and statehood.
They were designed for a frontier world:
Sparse population
Horseback travel
Local courts
Recordkeeping
Elections
Law enforcement
During the 19th century, Texas rapidly carved itself into counties so residents could reach a county seat in roughly a day’s travel. By the early 20th century, the county map had largely frozen at 254 counties, a number that remains unchanged today.
Counties are constitutional entities, but they are governed strictly by Dillon’s Rule. They have no inherent powers, no residual authority, and little flexibility to adapt structurally. Once the county map was locked in place, counties became increasingly mismatched to Texas’s urbanizing reality—too small in some areas, too weak in others, and too rigid everywhere.
Rather than consolidate counties, Texas chose to work around them.
Dillon’s Rule: The Legal Engine of Delegation
The doctrine that made this system possible is Dillon’s Rule, named after John Forrest Dillon (1831–1914), Chief Justice of the Iowa Supreme Court and later a professor at Columbia Law School. His 1872 treatise, Commentaries on the Law of Municipal Corporations, emerged during a period of explosive city growth and widespread municipal corruption.
Dillon rejected the notion that local governments possessed inherent authority. He articulated a rule designed to preserve state supremacy:
A local government may exercise only (1) powers expressly granted by the legislature, (2) powers necessarily implied from those grants, and (3) powers essential to its declared purpose—not merely convenient, but indispensable. Any reasonable doubt is resolved against the local government.
Texas did not merely adopt Dillon’s Rule; it embedded it structurally. Counties, special districts, ISDs, and authorities operate squarely under Dillon’s Rule. Even cities escape it only partially through home-rule charters, and only to the extent the Legislature allows.
Dillon’s Rule explains why Texas governance favors many narrow entities over few powerful ones.
Cities: Delegated Urban Management, Not Local Sovereignty
As towns grew denser, counties proved incapable of providing urban services. The state responded by authorizing cities to manage:
Police and fire protection
Streets and utilities
Zoning and land use
Local infrastructure
Cities are therefore delegated urban managers, not sovereign governments.
Texas later adopted home-rule charters to give larger cities greater flexibility, but home rule is widely misunderstood. It does not reverse Dillon’s Rule. It merely allows cities to act unless prohibited—while preserving the Legislature’s power to preempt, override, or limit local authority at any time.
Recent state preemption is not a breakdown of the system. It is the system operating as designed.
Independent School Districts: Function Over Geography
Education exposed the limits of place-based governance earlier than any other function.
Counties were too uneven. Cities were too political. Education required stability, long planning horizons, and uniform oversight.
Texas responded by removing education from both counties and cities and creating Independent School Districts.
ISDs are:
Single-purpose governments
Granted independent taxing authority
Authorized to issue bonds
Subject to state curriculum and accountability mandates
ISDs do not answer to cities or counties. They answer directly to the state. This was one of Texas’s earliest and clearest moves toward functional specialization over territorial governance.
Special Districts: Precision Instead of Consolidation
As Texas industrialized and urbanized in the 20th century, the Legislature faced increasingly specific problems:
Flood control
Water supply
Drainage
Fire protection
Hospitals
Ports and navigation
Rather than expand general-purpose governments, Texas created special districts—single-mission entities with narrow authority and dedicated funding streams.
Special districts are not accidental inefficiencies. They reflect a deliberate state preference:
Solve problems with precision, not with consolidation.
The result was effectiveness and speed, at the cost of growing fragmentation.
MUDs and Authorities: Growth and Risk as State Policy
Municipal Utility Districts and authorities are often mistaken for private or quasi-private entities. Legally, they are governments.
MUDs:
Are created under state law
Levy taxes
Issue bonds
Are governed by elected boards
Provide essential infrastructure
They allow the state to:
Enable development before cities arrive
Finance infrastructure without municipal debt
Shift costs to future residents
Avoid restructuring counties
Similarly, transit authorities, toll authorities, housing authorities, and local government corporations exist to isolate risk, bypass constitutional debt limits, and accelerate projects. These are not loopholes. They are state-designed instruments.
The Consequence: Functional Fragmentation
By the mid-20th century, Texas governance had become highly functional—and deeply fragmented:
Fixed counties
Expanding cities
Independent ISDs
Thousands of special districts
Authorities operating alongside cities
Infrastructure crossing every boundary
The system worked locally, but failed regionally.
No entity could plan coherently across jurisdictions. Funding decisions conflicted. Infrastructure systems overlapped. Federal requirements could not be met cleanly. At this point, Texas made another defining choice.
It did not consolidate governments. It pulled planning and coordination back upward, closer to the state.
Councils of Governments: State-Authorized Coordination
Beginning in the 1960s, Texas authorized Councils of Governments (COGs) to address fragmentation.
Today:
24 COGs cover the entire state
Each spans multiple counties
Membership includes cities, counties, ISDs, and districts
COGs:
Have no taxing authority
Have no regulatory power
Have no police power
They exist to coordinate, not to govern—to reconnect what delegation had scattered. Their weakness is intentional. They sit conceptually just beneath the state, not beneath local governments.
MPOs: Transportation Planning Pulled Upward
Transportation forced an even clearer pull-back.
Texas has 25 Metropolitan Planning Organizations, designated by the state to comply with federal law. MPOs plan, prioritize, and allocate federal transportation funding. They do not build roads, levy taxes, or override governments.
MPOs act as planning membranes between federal mandates and Texas’s fragmented local structure.
Water: Where Texas Explicitly Rejected Fragmentation
Water planning most clearly demonstrates the limits of local delegation.
Texas spans 15 major river basins, with annual rainfall ranging from under 10 inches in the west to over 50 inches in the east. Water ignores counties, cities, ISDs, and districts entirely.
Texas responded by creating:
Approximately 23 river authorities, organized by watershed
16 Regional Water Planning Areas, overseen by the Texas Water Development Board
A unified State Water Plan, adopted by the Legislature
Regional Water Planning Groups govern planning, not operations. Funding eligibility flows from compliance. This is state-directed regional planning with local execution.
Texas also created 95+ Groundwater Conservation Districts, organized by aquifer rather than politics—another instance of function overriding geography.
Public Health and Other Quiet Pull-Backs
Public health produced the same result. Disease ignores jurisdictional lines. Texas authorized county, city-county, and multi-county health districts to exercise delegated state police powers regionally.
The same pattern appears elsewhere:
Emergency management regions
Workforce development boards
Judicial administrative regions
20 Education Service Centers
Air-quality nonattainment regions
Each represents the same logic:
Delegation fragments
Fragmentation impairs system performance
The state restores coordination without transferring sovereignty
Final Synthesis
Texas local government did not evolve haphazardly. It followed a consistent philosophy:
Preserve sovereignty at the state level
Delegate functions narrowly
Avoid consolidation
Specialize relentlessly
Pull planning back upward when fragmentation becomes unmanageable
What appears complex or chaotic is actually layered intent.
Services are delegated downward. Planning is pulled back upward. Sovereignty never moves.
That tension—between delegation and coordination—is not a flaw in Texas government. It is its defining structural feature.
In a recent Free Press article, Nikki Haley argues that China is not waiting for a future war with the United States but is already engaged in a long-term, strategic campaign designed to weaken America without firing a shot. Rather than tanks or missiles, the tools are economic leverage, technological dependence, information manipulation, and political pressure—applied patiently over time to erode American confidence, unity, and resolve. The article’s most provocative insight is that Americans tend to think of war only as something declared and visible, while adversaries like China think in terms of psychological advantage, influence, and internal fracture.
That framing raised a deeper question for me: if the most effective way to weaken a democracy is to turn its citizens against one another, how vulnerable is the United States to hatred, distrust, and internal division—and what responsibility do citizens themselves bear in resisting it? Lastly, does this provide insight into events happening in our own backyard?
When the Enemy Wants You to Hate Your Neighbor
How Foreign Adversaries Exploit Division to Weaken American Institutions
Introduction: The War That Doesn’t Look Like a War
For most of American history, threats to national security arrived in visible forms: armies, missiles, uniforms, borders crossed. Today, the most dangerous threats often arrive silently—through phones, feeds, narratives, and emotions. Even through blogs like mine.
China, along with other foreign adversaries, does not need to defeat the United States on the battlefield to weaken it. A far cheaper and safer strategy exists: encourage Americans to distrust one another, despise their institutions, and lose faith in the idea that shared rules and shared facts can still bind a diverse society together.
This is not a conspiracy theory, nor is it uniquely Chinese. It is a well-documented form of modern statecraft often called information warfare, influence operations, or gray-zone conflict—competition deliberately kept below the threshold of open war.
The danger is not that Americans will suddenly become loyal to a foreign power. The danger is that Americans will begin to see each other as enemies, and their own institutions as illegitimate. When that happens, a society weakens itself from the inside.
The Strategic Objective: Fracture, Don’t Conquer
Foreign adversaries pursuing this strategy are not trying to persuade Americans of a single ideology. Their objective is simpler and more corrosive:
Reduce trust in elections
Reduce trust in courts and law enforcement
Reduce trust in journalism and expertise
Reduce trust in fellow citizens’ good faith
A divided society expends enormous energy fighting itself. It becomes harder to govern, slower to respond to crises, and more vulnerable to paralysis or authoritarian temptation.
Importantly, this strategy does not require creating new grievances. It relies on identifying existing ones—racial tensions, economic inequality, cultural change, immigration, crime, public health, religion—and amplifying them until compromise feels immoral and disagreement feels existential.
How Influence Operations Actually Work
Amplification, Not Invention
Foreign actors rarely invent American problems. They amplify real ones.
If a topic already produces anger, resentment, or fear, it is useful. If it already divides Americans into camps, it is valuable. The operation succeeds when people believe:
“My opponents are not merely wrong — they are dangerous.”
“Both-Sides” Escalation
One of the most misunderstood aspects of modern influence operations is that opposing sides are often targeted simultaneously.
One group is fed content that reinforces grievance, victimhood, or moral urgency. The opposing group is fed content that reinforces fear, resentment, or betrayal.
Each side becomes proof of the other side’s worst assumptions.
The goal is not ideological victory. The goal is maximum polarization.
Emotional Manipulation Over Persuasion
Facts matter less than feelings.
Content that spreads fastest tends to trigger anger, fear, humiliation, and moral outrage. Foreign influence campaigns exploit this reality. They do not aim to win debates; they aim to trigger reactions. Once emotion dominates, people share and escalate on behalf of the adversary—often unknowingly.
Erosion of the Referees
A healthy democracy depends on referees: election administrators, courts, professional journalism, and scientific expertise.
Foreign adversaries benefit when Americans believe all referees are corrupt or illegitimate. Once people conclude that elections are rigged, courts are political weapons, media lies by definition, and experts are propagandists, no outcome is accepted as fair. Only power remains.
Digital Architecture as a Force Multiplier
Modern platforms unintentionally reward the very behaviors foreign adversaries exploit. Outrage spreads faster than explanation. Certainty spreads faster than humility. Identity signaling spreads faster than evidence.
Foreign actors do not need to control these systems. They study them, learn what triggers Americans, and inject content accordingly.
Why the United States Is Especially Vulnerable
America’s greatest strengths—free expression, pluralism, open debate—also create vulnerability. Democracies must tolerate disagreement without letting it metastasize into hatred.
Foreign influence operations succeed only where fractures already exist. This leads to an uncomfortable truth: foreign adversaries do not create American divisions; they accelerate them.
As shared reality erodes, persuasion collapses. Only mobilization remains.
What “Success” Looks Like for the Adversary
Foreign influence is succeeding when:
Bad faith becomes the default assumption
Moderates withdraw from public discourse
Institutions lose legitimacy permanently
Violence begins to feel understandable
None of this requires a single decisive moment. It unfolds gradually through normalization.
What Actually Works as Defense
Broad censorship does not work. Government-decided “truth” does not work. Suppressing dissent backfires.
What works is resilience.
At the citizen level: pause before sharing, question emotional manipulation, distinguish outrage from importance, and practice restating opposing arguments fairly.
At the community level: real relationships, churches, civic groups, and local institutions reduce radicalization.
At the institutional level: transparency, humility, and consistent rule-following rebuild trust.
At the government level: exposing foreign operations, protecting elections, improving transparency, and investing in civic education—without policing viewpoints.
The Hard Truth
Foreign adversaries can encourage Americans to hate one another. They cannot force it.
They succeed only when Americans abandon restraint, humility, and shared rules.
A society capable of disagreement without dehumanization is extraordinarily difficult to destabilize.
Conclusion: The Strongest Defense Is Civic Character
The greatest threat to the United States today is not a foreign army crossing a border. It is the slow erosion of trust that makes self-government impossible.
China and other adversaries understand this. They study American psychology and culture not to convert Americans, but to divide them.
A republic survives not because it agrees, but because it argues without hatred.
Appendix A
A Christian Perspective on Division, Hatred, and the Subtle Work of the Enemy
Why a Christian Appendix Belongs Here
For Christians, the idea that an “enemy” seeks to divide people against one another is not metaphorical. It is foundational theology.
Scripture does not portray evil primarily as chaos or madness, but as order bent just enough to destroy love, trust, and truth. The Christian understanding of spiritual opposition is not that it invents sin from nothing, but that it twists what already exists—fear into hatred, conviction into self-righteousness, disagreement into contempt.
This appendix is not an attempt to spiritualize geopolitics. It is an acknowledgment that moral and spiritual realities operate alongside political ones, and that Christians, in particular, are warned repeatedly about becoming instruments of division while believing themselves righteous. Spiritual Warfare is as real to the Christian as military war games are to field commanders.
The Enemy’s Oldest Strategy: Divide and Accuse
In the Christian tradition, Satan is not merely a tempter but an accuser.
He accuses God to humanity. He accuses humanity to God. He accuses neighbor to neighbor.
Division is not collateral damage; it is the goal.
Jesus himself names this dynamic:
“Every kingdom divided against itself will be ruined, and every city or household divided against itself will not stand.” (Matthew 12:25)
The warning is not about foreign invasion. It is about internal fracture.
C. S. Lewis’s The Screwtape Letters reveals that evil is most effective when it is subtle.
The senior devil does not urge spectacular sin. He urges irritation, distraction, self-justification, and contempt disguised as clarity.
Scripture warns:
“If I have all knowledge, but have not love, I am nothing.” (1 Corinthians 13:2)
Conviction without love multiplies evil rather than defeating it.
The Weekend That Gave Birth to Screwtape
In July 1940, Lewis listened to Adolf Hitler speak on the radio. He later admitted how unsettlingly persuasive the speech felt—not because he believed it, but because he sensed how easily emotion could be moved even against reason.
The next day, sitting in church, Lewis conceived the idea of a book: letters from a senior devil to a junior devil advising how to misguide a Christian. He first called it As One Devil to Another.
That book became The Screwtape Letters.
Lewis understood that evil rarely presents itself as evil. It presents itself as reasonable, urgent, and justified. It borrows the language of virtue while hollowing it out.
The Enemy does not need believers to abandon truth. He only needs them to abandon love while wielding truth as a weapon.
When Moral Certainty Becomes a Spiritual Trap
Christians are especially vulnerable to confusing righteousness with being right.
Jesus’ harshest rebukes were aimed not at pagans but at the religiously certain:
“You strain out a gnat but swallow a camel.” (Matthew 23)
When disagreement feels like an attack on the soul, hatred begins to feel holy.
The Subtle Corruption of Truth
Christian theology teaches that Satan distorts truth rather than simply lying:
“Even Satan disguises himself as an angel of light.” (2 Corinthians 11:14)
True facts can be arranged to deceive. Real injustices can be framed to inflame. Truth without love becomes a tool of destruction.
Loving One’s Neighbor as National Security
If foreign adversaries benefit when Americans hate each other, then love of neighbor is not merely spiritual virtue—it is civic resilience.
“Love your neighbor as yourself.” (Matthew 22:39)
A society that refuses to dehumanize opponents is extraordinarily hard to fracture.
The Christian’s Temptation
“What does it profit a man to gain the whole world and forfeit his soul?” (Mark 8:36)
Winning political battles by adopting the Enemy’s methods means the Enemy has already won.
A Rule of Discernment
Does this message move me toward love of neighbor—or toward contempt disguised as righteousness?
If the latter, it should be resisted.
“The anger of man does not produce the righteousness of God.” (James 1:20)
Final Christian Reflection
Foreign adversaries may exploit division, but they are not the deepest threat.
The deeper threat is that believers, convinced they are fighting evil, may unknowingly serve it.
The Enemy does not require Christians to abandon their faith. He only needs them to forget its hardest commands.
Public tragedies have a way of collapsing time. Old debates are reopened as if they were never had. Long-standing policies are treated as provisional. And political reflexes reassert themselves with a familiar urgency: something must be done, and whatever is done must be fast, visible, and legislative.
A recent Reuters report describing a mass shooting at a beachside gathering in Australia illustrates this pattern with uncomfortable clarity. The event itself was horrifying. The response was predictable. Within hours, political leaders were discussing emergency parliamentary sessions, tightening gun licensing laws, and revisiting a firearm regime that has been in place for nearly three decades.
What makes this episode especially instructive is not that it occurred in Australia, but that it occurred despite Australia’s reputation for having among the strictest gun control laws in the world. The country’s post-1996 framework—created in the wake of the Port Arthur massacre—has long been cited internationally as a model of decisive legislative action. Yet here, after decades of regulation, registration, licensing, and oversight, the instinctive answer remains the same: more law.
This essay treats the Australian response not as an anomaly, but as a continuation—and confirmation—of two arguments I have made previously: one concerning mass shootings as a systems failure rather than a purely legal failure, and another concerning what I have called “one-page laws”—the belief that complex social problems can be solved by concise statutes and urgent press conferences.
The Reuters Story, Paraphrased
According to Reuters, a deadly shooting at a public gathering in Bondi shocked Australians and immediately raised questions about whether the country’s long-standing firearms regime remains adequate. One of the suspects reportedly held a legal gun license and was authorized to own multiple firearms. In response, state and federal officials suggested that parliament might be recalled to consider reforms, including changes to license duration, suitability assessments, and firearm ownership limits.
The article notes that while Australia’s gun laws dramatically reduced firearm deaths after 1996, the number of legally owned guns has since risen to levels exceeding those prior to the reforms. Advocates argue that this growth, combined with modern risks, requires updated legislation. Political leaders signaled openness to acting quickly.
What the article does not do—and what most post-tragedy coverage does not do—is explain precisely how additional laws would have prevented this specific act, or how such laws would be meaningfully enforced without expanding surveillance, discretion, or intrusion into everyday life.
That omission is not accidental. It reflects a deeper habit in public governance.
The First Essay Revisited: Mass Shootings as Systems Failures
In my earlier essay on mass shootings, I argued that these events are rarely the result of a single legal gap. Instead, they emerge from systemic breakdowns: failures of detection, communication, intervention, and follow-through. Warning signs often exist. Signals are missed, dismissed, or siloed. Institutions act sequentially rather than collectively.
The presence or absence of one additional statute does little to alter those dynamics.
The Australian case reinforces this point. The suspect was not operating in a legal vacuum. The system already required licensing, registration, and approval. The breakdown did not occur because the law was silent; it occurred because law is only one input into a much larger human system.
When tragedy strikes, however, it is far easier to amend a statute than to admit that prevention depends on imperfect human judgment, social cohesion, mental health systems, community reporting, and inter-agency coordination. Laws are tangible. Systems are messy.
The Second Essay Revisited: The Illusion of One-Page Laws
My essay on one-page laws addressed a related but broader problem: the temptation to treat legislation as a substitute for governance.
One-page laws share several characteristics:
They are easy to describe.
They signal moral seriousness.
They create the appearance of action.
They externalize complexity.
The harder questions—Who enforces this? How often? With what discretion? At what cost? With what error rate?—are deferred or ignored.
The Australian response fits this pattern precisely. Proposals to shorten license durations or tighten suitability standards sound decisive, but they conceal the real burden: reviewing thousands of existing licenses, detecting future risk in people who have not yet exhibited it, and doing so without violating basic principles of fairness or due process.
The law can authorize action. It cannot supply foresight.
Where the Two Essays Converge
Taken together, these two arguments point to a shared conclusion: legislation is often mistaken for resolution.
Mass violence is not primarily a legislative failure; it is a detection and intervention failure. One-page laws feel comforting because they compress complexity into moral clarity. But compression is not the same as control.
Australia’s experience underscores a difficult truth: once a society has implemented baseline restrictions, further legislative tightening produces diminishing returns. The remaining risk lies not in legal gaps, but in human unpredictability. Eliminating that last fraction of risk would require levels of monitoring and preemption that most free societies rightly reject.
This is the trade-off no emergency session of parliament wants to articulate.
Why the Reflex Persists
The rush to legislate after tragedy is not irrational—it is political. Laws are visible acts of leadership. They reassure the public that order is being restored. Admitting that not every horror can be prevented without dismantling civil society is a harder message to deliver.
But honesty matters.
Governance is not the art of passing laws; it is the discipline of building systems that function under stress. When tragedy is followed immediately by legislative theater, it risks substituting symbolism for substance and urgency for effectiveness.
Conclusion
The Bondi shooting is not evidence that Australia’s gun laws have failed in some absolute sense. Nor is it proof that further legislation will succeed. It is, rather, a case study—one that reinforces two prior conclusions:
First, that mass violence persists even in highly regulated environments because it arises from human systems, not statutory voids.
Second, that one-page laws offer emotional relief but rarely operational solutions.
Serious problems deserve serious thinking. Not every response can be reduced to a bill number and a headline. And not every tragedy has a legislative cure.
The real challenge is resisting the comforting illusion that lawmaking alone is governance—and doing the slower, quieter, less visible work of strengthening the systems that stand between instability and catastrophe.
A collaboration between Lewis McLain, Paul Grimes & AI (Idea prompted by Paul Grimes, City Manager of McKinney)
Urban Theory Meets the New Texas Growth Regime
I. Why cities experience service growth faster than population
Cities rarely experience growth as a smooth, proportional process. Long before population numbers appear alarming, residents begin to sense strain: longer response times, crowded facilities, rising calls for service, and increasing friction in public space. The discrepancy between modest population growth and outsized service demand has been observed across cities and eras, and it has produced a deep body of urban theory seeking to explain why cities behave this way.
Across disciplines, a shared conclusion emerges: density increases interaction, and interaction accelerates outcomes. These outcomes include innovation, productivity, and cultural vitality—but also conflict, disorder, and service demand. What varies among theorists is not the mechanism itself, but how cities can shape, moderate, or absorb its consequences.
II. Geoffrey West and the mathematics of acceleration
Geoffrey West’s contribution is foundational because it removes morality, politics, and culture from the initial explanation. Cities, in his framework, are not collections of individuals; they are networks. As networks grow denser, the number of possible interactions grows faster than the number of nodes. This produces superlinear scaling in many urban outputs. When population doubles, certain outcomes more than double.
Crucially, West shows that the same mathematical logic governs both positive and negative outcomes. Innovation and GDP rise superlinearly; so do some forms of crime, disease transmission, and social friction. The implication is unsettling but clarifying: cities are social accelerators by design. Service demand tied to interaction will often grow faster than population, not because governance has failed, but because the underlying structure makes it inevitable.
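In the scaling literature this is usually written as a simple power law; the exponents below are the approximate values West and Bettencourt report for socioeconomic outputs and for physical infrastructure:

$$
Y(N) \;\approx\; Y_0\, N^{\beta}, \qquad \beta \approx 1.15 \ \text{(socioeconomic outputs)}, \qquad \beta \approx 0.85 \ \text{(physical infrastructure)}
$$

Here $Y$ is an urban output and $N$ is population. Doubling the population therefore multiplies interaction-driven outputs by roughly $2^{1.15} \approx 2.2$—about a 15 percent bonus (or penalty) per doubling—while infrastructure needs grow somewhat more slowly than population.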
West assumes, however, that cities respond to acceleration by reinventing themselves—upgrading systems, redesigning institutions, and continuously adapting. That assumption becomes important later.
III. Jane Jacobs and the conditions that turn density into order
Jane Jacobs does not dispute that density increases interaction. Her work asks a different question: what kind of interaction?
Jacobs argues that dense places can be remarkably safe and resilient when they are mixed-use, human-scaled, and continuously occupied. Her concept of “eyes on the street” is not sentimental; it is a theory of informal governance. In healthy neighborhoods, constant presence creates passive supervision. People notice deviations. Streets regulate themselves long before police are required.
But Jacobs is equally clear about the failure mode. Density without diversity—large single-use developments, commuter-only corridors, or isolated residential blocks—removes the stabilizing feedback loops. Interaction still increases, but it becomes episodic, anonymous, and harder to regulate informally. In those conditions, service demand rises sharply.
Jacobs therefore reframes West’s mathematics: density raises interaction; urban form determines whether interaction stabilizes or combusts.
IV. Sampson and the social capacity to absorb friction
Robert Sampson’s work further refines the picture by introducing collective efficacy—the capacity of a community to maintain order through shared norms and willingness to intervene. His research demonstrates that dense or disadvantaged neighborhoods do not inevitably experience high crime. Where social cohesion is strong and institutions are trusted, communities suppress disorder even under pressure.
This matters because it shows that service demand is not driven by density alone. Two areas with similar physical form can generate radically different workloads depending on stability, tenure, turnover, and informal social control. For forecasting, Sampson’s insight is critical: interaction becomes costly when social capacity erodes.
V. Glaeser, incentives, and why density keeps happening
Edward Glaeser explains why density persists despite its costs. Proximity is economically powerful. Dense cities match labor and opportunity more efficiently, transmit knowledge faster, and generate wealth. These benefits accrue quickly and privately, while the costs—service strain, infrastructure wear, social friction—arrive later and publicly.
This asymmetry explains why development pressure is relentless and why political systems often favor growth even when local governments struggle to keep up. Density is not an accident; it is the predictable outcome of incentives embedded in land markets and regional economies.
VI. Scott and the danger of simplified governance
James C. Scott provides the warning. Governments, he argues, tend to simplify complex systems into legible categories because they are easier to manage. But cities function through local variation, informal practices, and spatial nuance. When governance relies too heavily on abstract averages—per-capita ratios, citywide forecasts—it often misses where strain actually emerges.
Service demand concentrates in places, not evenly among people. This is why cities often feel stressed long before the spreadsheets confirm it.
VII. The missing assumption: cities control the form of their own growth
Despite their differences, these thinkers share a quiet assumption: the city experiencing density also has authority over that density. West assumes institutional reinvention is possible. Jacobs assumes local control over land use and street design. Sampson assumes neighborhoods evolve within a municipal framework. Glaeser assumes prosperity helps fund adaptation. Scott assumes the state has power, even if it misuses it.
That assumption no longer reliably holds in Texas.
VIII. The Texas legislative shift: density without authority
Over the past decade, Texas has steadily constrained municipal authority over annexation and extraterritorial jurisdiction while expanding developer freedom. Growth has not slowed; it has been redirected. Increasingly, large, dense developments are built outside city limits, beyond zoning authority, and often beyond meaningful density control.
Yet interaction does not stop at the city line. Residents of these developments commute through cities, use city roads, access city amenities, and generate service demand that cities are often contractually or practically compelled to address. The result is a new condition: density without authority.
This interrupts the thinkers’ chain of logic. Interaction still accelerates. Service demand still rises. But the city’s ability to shape the form, timing, and integration of growth is weakened. Institutional adaptation becomes reactive rather than formative.
IX. Houston and the path North Texas is now taking
This pattern is not new statewide. The Houston region has long operated under fragmented governance: cities, counties, MUDs, and special districts collectively producing urban form without a single coordinating authority. Houston’s growth model has always relied on externalized infrastructure finance and delayed incorporation.
North Texas historically followed a different path. Cities like McKinney and Plano grew through annexation, internalized infrastructure, and municipal sequencing. Density, services, and revenue were aligned.
Texas policy has changed that trajectory. North Texas is being pushed toward a Houston-style future—not by local choice, but by legal structure.
X. Aging: the force that converts today’s growth into tomorrow’s strain
Growth does not remain new. Aging is the force that locks in consequences.
A city dominated by 0–5 year-old apartments is operationally different from the same city thirty years later. As housing stock ages, rents soften, tenant turnover increases, maintenance is deferred, and informal adaptations emerge. The same density produces more service demand over time. A city of homeowners and that same city after the homes convert to rentals are, in effect, two different censuses inside the same boundaries.
Infrastructure ages alongside housing. Systems built in growth waves fail in cohorts. Maintenance demands converge. Replacement cycles collide with operating budgets. Even if population stabilizes, service pressure intensifies.
Aging transforms density from an abstract risk into a concrete workload.
XI. Schools as the clearest signal of the lifecycle mismatch
School closures—such as those experienced by McKinney ISD and many other Texas districts—are not isolated education issues. They are urban lifecycle signals.
When cities are young:
Family formation is high
Enrollment grows
Schools are built quickly
As housing ages:
Household size shrinks
Families age in place
Single-family homes convert to rentals
Multifamily units turn over rapidly
Student yield per unit declines
At the same time, infrastructure and neighborhoods age, and service demand rises elsewhere. Police calls, code enforcement, and social services grow even as schools empty. This is the paradox many Texas cities now face: schools closing even as the city keeps growing.
School closures therefore mark the transition from growth-driven demand to aging-driven demand. They reveal that population alone no longer explains service needs.
XII. The compounding effect of ETJ growth and aging
ETJ-driven development postpones this reckoning but does not prevent it. New developments outside city limits age just as surely as those inside. When they do, cities face a delayed shock: aging neighborhoods and infrastructure they did not shape, often without full fiscal integration. New growth in the ETJ requires new local schools; excess capacity in older schools cannot simply absorb that growth, because bus routes and distances act as a constraint.
Houston has lived with this reality for decades. North Texas is entering it now.
XIII. Conclusion: a new urban regime
The urban theorists remain correct about density, interaction, and acceleration. What Texas has altered is the governing environment in which those forces play out. Annexation limits and ETJ erosion do not stop growth. They delay accountability. Aging ensures that delay is temporary.
For cities like McKinney, the future is not simply more growth, nor even more density. It is a shift toward a fragmented, aging, interaction-heavy urban form—one that increasingly resembles Houston’s long-standing condition rather than North Texas’s historical model.
Understanding this arc—density → interaction → aging → service strain, under diminished local control—is essential before any discussion of elasticity, finance, or sustainability can be honest. Great thinkers are rethinking!
Appendix A
Key Thinkers, Publications, and Intellectual Contributions Referenced in This Essay
This appendix summarizes the principal authors and works referenced in the essay Density, Interaction, Aging, and the Fracturing of Local Control. Each has influenced modern thinking about cities, growth, density, governance, and service demand. The summaries below are intentionally descriptive rather than argumentative.
Geoffrey West
Primary Works
Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies (2017)
Bettencourt, L. M. A., et al., “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences (PNAS), 2007
Core Contribution
West applies principles from physics and network theory to biological and social systems. His work demonstrates that many urban outputs—economic production, innovation, and certain social pathologies—scale superlinearly with population because cities function as dense interaction networks. His framework explains why some service demands grow faster than population and why cities must continually adapt to accelerating pressures.
Relevance to the Essay
Provides the mathematical foundation for understanding why interaction-driven services (public safety, emergency response, enforcement) often outpace population growth.
Jane Jacobs
Primary Works
The Death and Life of Great American Cities (1961)
Core Contribution
Jacobs challenges top-down planning and argues that healthy cities depend on mixed uses, short blocks, human-scale design, and continuous street activity. Her concept of “eyes on the street” explains how informal social control stabilizes dense environments.
Relevance to the Essay
Explains why density does not automatically produce disorder and how urban form determines whether interaction becomes self-regulating or service-intensive.
Robert J. Sampson
Primary Works
Great American City: Chicago and the Enduring Neighborhood Effect (2012)
Sampson, Raudenbush, and Earls, “Neighborhoods and Violent Crime: A Multilevel Study of Collective Efficacy,” Science (1997)
Core Contribution
Sampson introduces the concept of collective efficacy—the ability of communities to maintain order through shared norms and informal intervention. His work demonstrates that social cohesion and neighborhood stability can suppress disorder independent of density.
Relevance to the Essay
Provides the social mechanism explaining why similar densities can produce very different service demands over time.
Edward Glaeser
Primary Works
Triumph of the City (2011)
Core Contribution
Glaeser emphasizes the economic benefits of density, arguing that cities exist because proximity increases productivity, innovation, and opportunity. He frames density as an economic choice driven by incentives rather than a planning failure.
Relevance to the Essay
Explains why growth pressure persists despite service strain and why development tends to outpace municipal capacity to respond.
James C. Scott
Primary Works
Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (1998)
Core Contribution
Scott critiques centralized planning and “legibility”—the tendency of governments to simplify complex systems into administratively convenient categories. He shows how ignoring local knowledge and spatial nuance often produces unintended consequences.
Relevance to the Essay
Warns against overreliance on citywide averages and per-capita metrics in forecasting service demand.
Crime Concentration and Place-Based Policing
Key Authors
David Weisburd
Anthony Braga
Representative Works
Weisburd, “The Law of Crime Concentration at Places,” Criminology
Braga et al., studies on hot-spots policing
Core Contribution
Demonstrates that crime and disorder are highly concentrated in small geographic areas rather than evenly distributed across populations.
Relevance to the Essay
Supports the argument that service demand accelerates spatially and perceptually before it appears in aggregate population statistics.
Urban Economics and Land-Use Structure
Additional Influential Works
Alain Bertaud, Order Without Design (2018)
Donald Shoup, The High Cost of Free Parking (2005)
Core Contributions
Bertaud emphasizes cities as labor markets shaped by land constraints rather than plans. Shoup demonstrates how parking policy distorts density, travel behavior, and land use.
Relevance to the Essay
Provide supporting context for how policy choices shape interaction patterns and service demand indirectly.
Houston-Region Governance and Fragmentation
Institutions and Research
Rice University Kinder Institute for Urban Research
Texas A&M Real Estate Research Center
Core Contribution
Document the long-standing use of special districts, MUDs, and fragmented governance structures in the Houston region and their implications for infrastructure, service delivery, and long-term municipal responsibility.
Relevance to the Essay
Establish Houston as a precedent for the fragmented growth model North Texas is increasingly approaching.
Texas Local Government and Annexation Policy
Statutory Context
Texas Local Government Code, Chapter 42 (Extraterritorial Jurisdiction)
Legislative reforms including SB 6 (2017), HB 347 (2019), and SB 2038 (2023)
Core Contribution
These changes constrain municipal annexation and weaken ETJ authority, altering the alignment between growth, governance, and service responsibility.
Relevance to the Essay
Provide the legal backdrop for the “density without authority” condition described.
School District Demographic and Facility Trends
Contextual Sources
Texas Education Agency (TEA) enrollment data
District-level facility planning and consolidation reports (e.g., MISD and peer districts)
Core Contribution
School closures and consolidations reflect long-term demographic shifts, housing lifecycle effects, and declining student yield in aging neighborhoods.
Relevance to the Essay
Serve as a visible indicator of urban aging and lifecycle mismatch in growing cities.
Closing Note on Use
This appendix is intended to clarify intellectual provenance, not to prescribe policy positions. The essay draws from multiple disciplines—physics, sociology, economics, planning, and public administration—to explain why modern cities experience accelerating service demand under changing governance conditions.
LFM Note: My personal circle of great thinkers leaves me always yearning for more time to visit with them. Lunch with Paul Grimes always takes a deeper probe than I am expecting. A visit with David Leininger always expands my knowledge and surprises me with more than just nuances to improve my vocabulary and vision. Dan Johnson considers me one of his mentors, but he thinks so far above and ahead, describing his way of thinking with facts mixed with a tinge of Greek mythology. Even a short visit with Dan clarifies who the real mentor is. Our conversations start off with energy and end up with us feeding off each other like two little kids making a discovery. Don Paschal has been a friend and colleague for the longest and is full of experience and wisdom, with a refreshing biblical integration. Becky Brooks is one of my closest colleagues and is like a sister, in sync with common vision and analyses. There are more. But I must stop here. LFM
Few moments in ancient literature capture the moral courage required to speak truth to power as vividly as the encounter between the prophet Nathan and King David. The scene is brief, almost understated, yet it exposes a problem as old as authority itself: what happens when power no longer hears the truth.
David, at this point in the biblical story, is not a fragile leader. He is Israel’s greatest king—military hero, national symbol, and political success. His reign is stable. His enemies are subdued. His legitimacy is unquestioned. That success, however, has begun to insulate him from accountability.¹
The Bible does not soften what happens next, and it is worth telling plainly.
What David Did
One evening, from the roof of his palace, David notices a woman bathing. He learns she is married to one of his own soldiers, a man currently fighting on the front lines. David summons her anyway. Because he is king, his request carries force whether spoken gently or not. She becomes pregnant.²
David now faces exposure. Instead of confessing, he attempts to manage the situation. He recalls the husband from battle, hoping circumstances will hide the truth. When that fails, David escalates. He sends the man back to war carrying a sealed message to the commanding general—an order placing him where the fighting is fiercest and support will be withdrawn.³
The man is killed.
The machinery of power functions smoothly. No inquiry follows. David marries the widow. From the outside, the matter disappears. Politically, the problem is solved. Morally, it has only been buried.
This is the danger Scripture names without hesitation: power does not merely enable wrongdoing; it can normalize it.
Why Nathan Matters
Nathan enters the story not as a revolutionary or rival, but as a prophet—someone whose authority comes from obedience to God rather than proximity to the throne. He is not part of David’s chain of command. He does not benefit from David’s favor. That independence is everything.⁴
Nathan does not accuse David directly. Instead, he tells a story.
He describes two men in a town. One is rich, with vast flocks. The other is poor, possessing only a single lamb—so cherished it eats at his table and sleeps in his arms. When a guest arrives, the rich man does not draw from his abundance. He takes the poor man’s lamb instead.⁵
David is outraged. As king, he pronounces judgment swiftly and confidently. The man deserves punishment. Restitution. Consequences.
Then Nathan speaks the words that collapse the distance between story and reality:
**“You are the man.”**⁶
In an instant, David realizes he has judged himself. Nathan names the facts plainly: David used his power to take what was not his, destroyed a loyal man to conceal it, and assumed his position placed him beyond accountability.
This is not a trap meant to humiliate. It is truth delivered with precision. Nathan allows David’s own moral instincts—still intact beneath layers of authority—to render the verdict.
Speaking Truth to Power Is Dangerous
Nathan’s courage should not be underestimated. Kings do not respond kindly to exposure. Many prophets were imprisoned or killed for far less. Nathan risks his position, his safety, and possibly his life. He cannot know how David will react. Faithfulness here is not measured by outcome but by obedience.⁷
Speaking truth to power is rarely loud. It is rarely celebrated. It requires proximity without dependence, clarity without cruelty, and courage without illusion. Nathan does not shout from outside the palace gates. He walks directly into the seat of power and speaks.
David’s response is remarkable precisely because it is not guaranteed:
*“I have sinned against the Lord.”*⁸
Repentance does not erase consequences. Nathan makes that clear. Forgiveness and accountability coexist. The Bible refuses to confuse mercy with immunity.⁹
Why This Story Still Matters
This encounter reveals something essential about power: authority tends to surround itself with affirmation and silence. Over time, wrongdoing becomes justified, then invisible. Institutions close ranks. Loyalty replaces truth. Image replaces integrity.
Nathan represents the indispensable outsider—the one who loves truth more than access and justice more than comfort. He does not seek to destroy David. He seeks to save him from becoming a king who can no longer hear.
Scripture does not present leaders as villains by default. It presents them as dangerous precisely because they are human. Power magnifies both virtue and vice. Without truth, it corrodes.¹⁰
The Broken Hallelujah
This is where Leonard Cohen’s Hallelujah belongs—not as ornament, but as interpretation.
The song opens with David’s musical gift, his calling, his nearness to God:
“Now I’ve heard there was a secret chord
That David played, and it pleased the Lord…”
But Cohen does not linger there. He moves quickly to the roof, the bath, the fall:
“You saw her bathing on the roof
Her beauty and the moonlight overthrew you.”
Cohen refuses to romanticize David any more than Nathan does. He understands that David’s story is not primarily about victory, but about collapse and confession. And he understands something many listeners miss: praise spoken after exposure cannot sound the same as praise spoken before it.
That is why the refrain matters:
“It’s a broken hallelujah.”
A cheap hallelujah is easy—praise without truth, worship without repentance, confidence without cost. It thrives where power is affirmed but never confronted.¹¹
A broken hallelujah is what remains when illusion is stripped away. It is praise that has passed through judgment. It is faith no longer dependent on image, position, or success. It is what David offers in Psalm 51, after Nathan leaves and the consequences remain.¹²
Nathan does not end David’s worship. He saves it from becoming hollow.
For Our Time
Nathan’s story is not ancient trivia. It is a permanent challenge.
Every generation builds systems that reward silence and discourage dissent—governments, corporations, churches, universities, families. Power still resists accountability. Truth still carries a cost. And praise without honesty still rings empty.
Speaking truth to power does not guarantee reform. It guarantees integrity.
Nathan spoke. David listened. And centuries later, a songwriter captured what that moment sounds like from the inside—not triumphant, not resolved, but honest.
Not every hallelujah is joyful. Some are whispered. Some are broken. And those may be the ones worth hearing most.
Scripture References & Notes
David’s power and success: 2 Samuel 5–10
Bathsheba episode begins: 2 Samuel 11:1–5
Uriah’s death order: 2 Samuel 11:14–17
Nathan as prophet to David: 2 Samuel 7; 2 Samuel 12
Nathan’s parable: 2 Samuel 12:1–4
“You are the man”: 2 Samuel 12:7
Prophetic risk: cf. 1 Kings 18; Jeremiah 20:1–2
David’s confession: 2 Samuel 12:13
Consequences despite forgiveness: 2 Samuel 12:10–14
Power and accountability theme: Proverbs 29:2; Psalm 82
Now I’ve heard there was a secret chord
That David played, and it pleased the Lord
But you don’t really care for music, do you?
It goes like this, the fourth, the fifth
The minor falls, the major lifts
The baffled king composing Hallelujah

Hallelujah, Hallelujah
Hallelujah, Hallelujah

Your faith was strong but you needed proof
You saw her bathing on the roof
Her beauty and the moonlight overthrew you
She tied you to a kitchen chair
She broke your throne, and she cut your hair
And from your lips she drew the Hallelujah

Hallelujah, Hallelujah
Hallelujah, Hallelujah

You say I took the name in vain
I don’t even know the name
But if I did, well, really, what’s it to you?
There’s a blaze of light in every word
It doesn’t matter which you heard
The holy or the broken Hallelujah

Hallelujah, Hallelujah
Hallelujah, Hallelujah

I did my best, it wasn’t much
I couldn’t feel, so I tried to touch
I’ve told the truth, I didn’t come to fool you
And even though it all went wrong
I’ll stand before the Lord of Song
With nothing on my tongue but Hallelujah
Excel, SQL Server, Power BI — With AI Doing the Heavy Lifting
A collaboration between Lewis McLain & AI
Introduction: The Skill That Now Matters Most
The most important analytical skill today is no longer memorizing syntax, mastering a single tool, or becoming a narrow specialist.
The must-have skill is knowing how to direct intelligence.
In practice, that means combining:
Excel for thinking, modeling, and scenarios
SQL Server for structure, scale, and truth
Power BI for communication and decision-making
AI as the teacher, coder, documenter, and debugger
This is not about replacing people with AI. It is about finally separating what humans are best at from what machines are best at—and letting each do their job.
1. Stop Explaining. Start Supplying.
One of the biggest mistakes people make with AI is trying to explain complex systems to it in conversation.
That is backward.
The Better Approach
If your organization has:
an 80-page budget manual
a cost allocation policy
a grant compliance guide
a financial procedures handbook
even the City Charter
Do not summarize it for AI. Give AI the document.
Then say:
“Read this entire manual. Summarize it back to me in 3–5 pages so I can confirm your understanding.”
This is where AI excels.
AI is extraordinarily good at:
absorbing long, dense documents
identifying structure and hierarchy
extracting rules, exceptions, and dependencies
restating complex material in plain language
Once AI demonstrates understanding, you can say:
“Assume this manual governs how we budget. Based on that understanding, design a new feature that…”
From that point on, AI is no longer guessing. It is operating within your rules.
This is the fundamental shift:
Humans provide authoritative context
AI provides execution, extension, and suggested next steps
You will see this principle repeated throughout this post and the appendices—because everything else builds on it.
2. The Stack Still Matters (But for Different Reasons Now)
AI does not eliminate the need for Excel, SQL Server, or Power BI. It makes them far more powerful—and far more accessible.
Excel — The Thinking and Scenario Environment
Excel remains the fastest way to:
test ideas
explore “what if” questions
model scenarios
communicate assumptions clearly
What has changed is not Excel—it is the burden placed on the human.
You no longer need to:
remember every formula
write VBA macros from scratch
search forums for error messages
AI already understands:
Excel formulas
Power Query
VBA (Visual Basic for Applications, Excel’s automation language)
You can say:
“Write an Excel model with inputs, calculations, and outputs for this scenario.”
AI will:
generate the formulas
structure the workbook cleanly
comment the logic
explain how it works
If something breaks:
AI reads the error message
explains why it occurred
fixes the formula or macro
Excel becomes what it was always meant to be: a thinking space, not a memory test.
SQL Server — The System of Record and Truth
SQL Server is where analysis becomes reliable, repeatable, and scalable.
It holds:
historical data (millions of records are routine)
structured dimensions
consistent definitions
auditable transformations
Here is the shift AI enables:
You do not need to be a syntax expert.
SQL (Structured Query Language) is something AI already understands deeply.
You can say:
“Create a SQL view that allocates indirect costs by service hours. Include validation queries.”
AI will:
write the SQL
optimize joins
add comments
generate test queries
flag edge cases
produce clear documentation
AI can also interpret SQL Server error messages, explain them in plain English, and rewrite the code correctly.
This removes one of the biggest barriers between finance and data systems.
SQL stops being “IT-only” and becomes a shared analytical language, with AI translating analytical intent into executable code.
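To make that concrete, here is a minimal, self-contained T-SQL sketch of the kind of allocation logic such a request might produce. Every name and number in it is invented for illustration; a real version would read from your own tables and would still be reviewed before use.

```sql
-- Minimal sketch: allocate an indirect cost pool in proportion to service hours
-- (all names and numbers below are hypothetical illustrations)
WITH ServiceHours (Department, Hours) AS (
    SELECT * FROM (VALUES ('Police', 5200), ('Fire', 3800), ('Parks', 1000)) v (Department, Hours)
),
IndirectPool (TotalIndirectCost) AS (
    SELECT CAST(250000.00 AS decimal(12,2))
)
SELECT
    s.Department,
    s.Hours,
    s.Hours * 1.0 / SUM(s.Hours) OVER ()                   AS ShareOfHours,
    p.TotalIndirectCost * s.Hours / SUM(s.Hours) OVER ()   AS AllocatedCost
FROM ServiceHours AS s
CROSS JOIN IndirectPool AS p;
-- Validation idea: the sum of AllocatedCost across departments must equal the 250,000 pool.
```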
Power BI — Where Decisions Happen
Power BI is the communication layer: dashboards, trends, drilldowns, and monitoring.
It relies on DAX (Data Analysis Expressions), the calculation language used by Power BI.
Here is the key reassurance:
AI already understands DAX extremely well.
DAX is:
rule-based
pattern-driven
language-like
This makes it ideal for AI assistance.
You do not need to memorize DAX syntax. You need to describe what you want.
For example:
“I want year-over-year change, rolling 12-month averages, and per-capita measures that respect slicers.”
AI can:
write the measures
explain filter context
fix common mistakes
refactor slow logic
document what each measure does
Power BI becomes less about struggling with formulas and more about designing the right questions.
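As a concrete illustration of the logic behind those measures (shown in T-SQL rather than DAX, since the arithmetic is the same), here is a small sketch that computes year-over-year change, a rolling 12-month average, and cost per capita. The monthly figures are invented, and a real rolling average obviously needs at least twelve months of history.

```sql
-- Sketch of the calculation logic the DAX measures would encode (data invented for illustration)
WITH MonthlyCost (MonthStart, TotalCost, Population) AS (
    SELECT * FROM (VALUES
        ('2024-01-01', 1200000, 48000),
        ('2024-02-01', 1150000, 48200),
        ('2024-03-01', 1300000, 48500)
    ) v (MonthStart, TotalCost, Population)
)
SELECT
    MonthStart,
    TotalCost,
    TotalCost - LAG(TotalCost, 12) OVER (ORDER BY MonthStart)              AS YearOverYearChange,  -- NULL until 12 prior months exist
    AVG(1.0 * TotalCost) OVER (ORDER BY MonthStart
                               ROWS BETWEEN 11 PRECEDING AND CURRENT ROW)  AS Rolling12MonthAvg,
    1.0 * TotalCost / NULLIF(Population, 0)                                AS CostPerCapita
FROM MonthlyCost;
```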
3. AI as the Documentation Engine (Quietly Transformational)
Documentation is where most analytical systems decay.
Excel models with no explanation
SQL views nobody understands
Macros written years ago by someone who left
Reports that “work” but cannot be trusted
AI changes this completely.
SQL Documentation
AI can:
add inline comments to SQL queries
write plain-English descriptions of each view
explain table relationships
generate data dictionaries automatically
You can say:
“Document this SQL view so a new analyst understands it.”
And receive:
a clear narrative
assumptions spelled out
warnings about common mistakes
Excel & Macro Documentation
AI can:
explain what each worksheet does
document VBA macros line-by-line
generate user instructions
rewrite messy macros into cleaner, documented code
Recently, I had a powerful but stodgy Excel workbook with over 1.4 million formulas. AI read the entire file, explained the internal logic accurately, and rewrote the system in SQL with a few hundred well-documented lines—producing identical results.
Documentation stops being an afterthought. It becomes cheap, fast, and automatic.
4. AI as Debugger and Interpreter
One of AI’s most underrated strengths is error interpretation.
AI excels at:
reading cryptic error messages
identifying likely causes
suggesting fixes
explaining failures in plain language
You can copy-paste an error message without comment and say:
“Explain this error and fix the code.”
This applies to:
Excel formulas
VBA macros
SQL queries
Power BI refresh errors
DAX logic problems
Hours of frustration collapse into minutes.
5. What Humans Still Must Do (And Always Will)
AI is powerful—but it is not responsible for outcomes.
Humans must still:
define what words mean (“cost,” “revenue,” “allocation”)
set policy boundaries
decide what is reasonable
validate results
interpret implications
make decisions
The human role becomes:
director
creator
editor
judge
translator
AI does not replace judgment. It amplifies disciplined judgment.
6. Why This Matters Across the Organization
For Managers
Faster insight
Clearer explanations
Fewer “mystery numbers”
Greater confidence in decisions
For Finance Professionals
Less time fighting tools
More time on policy, tradeoffs, and risk
Stronger documentation and audit readiness
For IT Professionals
Cleaner specifications
Fewer misunderstandings
Better separation of logic and presentation
More maintainable systems
This is not a turf shift. It is a clarity shift.
7. The Real Skill Shift
The modern analyst does not need to:
memorize every function
master every syntax rule
become a full-time programmer
The modern analyst must:
ask clear questions
supply authoritative context
define constraints
validate outputs
communicate meaning
AI handles the rest.
Conclusion: Intelligence, Directed
Excel, SQL Server, and Power BI remain the backbone of serious analysis—not because they are trendy, but because they mirror how thinking, systems, and decisions actually work.
AI changes how we use them:
it reads the manuals
writes the code
documents the logic
fixes the errors
explains the results
Humans provide direction. AI provides execution.
Those who learn to work this way will not just be more efficient—they will be more credible, more influential, and more future-proof.
Appendix A
A Practical AI Prompt Library for Finance, Government, and Analytical Professionals
This appendix is meant to be used, not admired.
These prompts reflect how professionals actually work: with rules, constraints, audits, deadlines, and political consequences.
You are not asking AI to “be smart.” You are directing intelligence.
A.1 The Context-Loading Prompt
“Read the attached document in full. Treat it as authoritative. Summarize the structure, rules, definitions, exceptions, and dependencies. Do not add assumptions. I will confirm your understanding.”
Why this matters
Eliminates guessing
Aligns AI with your institutional reality
Prevents hallucinated rules
A.2 Excel Modeling Prompts
Scenario Model
“Design an Excel workbook with Inputs, Calculations, and Outputs tabs. Use named ranges. Include scenario toggles and validation checks that confirm totals tie out.”
Formula Debugging
“This Excel formula returns an error. Explain why, fix it, and rewrite it in a clearer form.”
Macro Creation
“Write a VBA macro that refreshes all data connections, recalculates, logs a timestamp, and alerts the user if validation checks fail. Comment every section.”
Documentation
“Explain this Excel workbook as if onboarding a new analyst. Describe what each worksheet does and how inputs flow to outputs.”
A.3 SQL Server Prompts
View Creation
“Create a SQL view that produces monthly totals by City and Department. Grain must be City-Month-Department. Exclude void transactions. Add comments and validation queries.”
Performance Refactor
“Refactor this SQL query for performance without changing results. Explain what you changed and why.”
Error Interpretation
“Here is a SQL Server error message. Explain it in plain English and fix the query.”
Documentation
“Document this SQL schema so a new analyst understands table purpose, keys, and relationships.”
A.4 Power BI / DAX Prompts
(DAX = Data Analysis Expressions, the calculation language used by Power BI — a language AI already understands deeply.)
Measure Creation
“Create DAX measures for Total Cost, Cost per Capita, Year-over-Year Change, and Rolling 12-Month Average. Explain filter context for each.”
Debugging
“This DAX measure returns incorrect results when filtered. Explain why and correct it.”
Model Review
“Review this Power BI data model and identify risks: ambiguous relationships, missing dimensions, or inconsistent grain.”
A.5 Validation & Audit Prompts
Validation Suite
“Create validation queries that confirm totals tie to source systems and flag variances greater than 0.1%.”
Audit Explanation
“Explain how this model produces its final numbers in language suitable for auditors.”
A.6 Training & Handoff Prompts
Training Guide
“Create a training guide for an internal analyst explaining how to refresh, validate, and extend this model safely.”
Institutional Memory
“Write a ‘how this system thinks’ document explaining design philosophy, assumptions, and known limitations.”
Key Principle
Good prompts don’t ask for brilliance. They provide clarity.
Appendix B
How to Validate AI-Generated Analysis Without Becoming Paranoid
AI does not eliminate validation. It raises the bar for it.
The danger is not trusting AI too much. The danger is trusting anything without discipline.
B.1 The Rule of Independent Confirmation
Every important number must:
tie to a known source, or
be independently recomputable
If it cannot be independently confirmed, it is not final.
B.2 Validation Layers (Use All of Them)
Layer 1 — Structural Validation
Correct grain (monthly vs annual)
No duplicate keys
Expected row counts
Layer 2 — Arithmetic Validation
Subtotals equal totals
Allocations sum to 100%
No unexplained residuals
Layer 3 — Reconciliation
Ties to GL, ACFR, payroll, ridership, etc.
Same totals across tools (Excel, SQL, Power BI)
Layer 4 — Reasonableness Tests
Per-capita values plausible?
Sudden jumps explainable?
Trends consistent with known events?
AI can help generate all four layers, but humans must decide what “reasonable” means.
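As a sketch of what Layers 1 and 3 can look like in practice, the queries below show the pattern; the table and column names are placeholders for your own objects, and the 0.1% threshold is simply an example tolerance.

```sql
-- Layer 1 (structural): no duplicate keys at the intended City-Month-Department grain
SELECT City, MonthStart, Department, COUNT(*) AS RowsAtGrain
FROM dbo.vw_MonthlyTotals            -- hypothetical model output
GROUP BY City, MonthStart, Department
HAVING COUNT(*) > 1;

-- Layer 3 (reconciliation): flag months where the model drifts more than 0.1% from the GL
SELECT m.MonthStart,
       m.ModelTotal,
       g.GLTotal,
       ABS(m.ModelTotal - g.GLTotal) / NULLIF(g.GLTotal, 0) AS VariancePct
FROM dbo.vw_ModelMonthlySummary AS m   -- hypothetical summary of the model
JOIN dbo.GL_MonthlySummary      AS g ON g.MonthStart = m.MonthStart
WHERE ABS(m.ModelTotal - g.GLTotal) / NULLIF(g.GLTotal, 0) > 0.001;
```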
B.3 The “Explain It Back” Test
One of the strongest validation techniques:
“Explain how this number was produced step by step.”
If the explanation:
is coherent
references known rules
matches expectations
You’re on solid ground.
If not, stop.
B.4 Change Detection
Always compare:
this month vs last month
current version vs prior version
Ask AI:
“Identify and explain every material change between these two outputs.”
This catches silent errors early.
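A minimal sketch of that comparison, assuming two hypothetical versions of the same output table:

```sql
-- Compare the current output table to the prior published version (names are placeholders)
SELECT COALESCE(c.City, p.City)                 AS City,
       COALESCE(c.MonthStart, p.MonthStart)     AS MonthStart,
       p.Total                                  AS PriorTotal,
       c.Total                                  AS CurrentTotal,
       ISNULL(c.Total, 0) - ISNULL(p.Total, 0)  AS Change
FROM dbo.Output_Current AS c
FULL OUTER JOIN dbo.Output_Prior AS p
     ON p.City = c.City AND p.MonthStart = c.MonthStart
WHERE ISNULL(c.Total, 0) <> ISNULL(p.Total, 0);   -- only rows that changed, appeared, or disappeared
```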
B.5 What Validation Is Not
Validation is not:
blind trust
endless skepticism
redoing everything manually
Validation is structured confidence-building.
B.6 Why AI Helps Validation (Instead of Weakening It)
AI:
generates test queries quickly
explains failures clearly
documents expected behavior
flags anomalies humans may miss
AI doesn’t reduce rigor. It makes rigor affordable.
Appendix C
What Managers Should Ask For — and What They Should Stop Asking For
This appendix is for leaders.
Good management questions produce good systems. Bad questions produce busywork.
C.1 What Managers Should Ask For
“Show me the assumptions.”
If assumptions aren’t visible, the output isn’t trustworthy.
“How does this tie to official numbers?”
Every serious analysis must reconcile to something authoritative.
“What would change this conclusion?”
Good models reveal sensitivities, not just answers.
“How will this update next month?”
If refresh is manual or unclear, the model is fragile.
“Who can maintain this if you’re gone?”
This forces documentation and institutional ownership.
C.2 What Managers Should Stop Asking For
❌ “Just give me the number.”
Numbers without context are liabilities.
❌ “Can you do this quickly?”
Speed without clarity creates rework and mistrust.
❌ “Why can’t this be done in Excel?”
Excel is powerful—but it is not a system of record.
❌ “Can’t AI just do this automatically?”
AI accelerates work within rules. It does not invent governance.
C.3 The Best Managerial Question of All
“How confident should I be in this, and why?”
That question invites:
validation
explanation
humility
trust
It turns analysis into leadership support instead of technical theater.
Appendix D
Job Description: The Modern Analyst (0–3 Years Experience)
This job description reflects what an effective, durable analyst looks like today — not a unicorn, not a senior architect, and not a narrow technician.
This role assumes the analyst will work in an environment that uses Excel, SQL Server, Power BI, and AI tools as part of normal operations.
Position Title
Data / Financial / Business Analyst (Title may vary by organization)
Experience Level
Entry-level to 3 years of professional experience
Recent graduates encouraged to apply
Role Purpose
The Modern Analyst supports decision-making by:
transforming raw data into reliable information,
building repeatable analytical workflows,
documenting logic clearly,
and communicating results in ways leaders can trust.
This role is not about memorizing syntax or becoming a single-tool expert. It is about directing analytical tools — including AI — with clarity, discipline, and judgment.
Core Responsibilities
1. Analytical Thinking & Problem Framing
Translate business questions into analytical tasks
Clarify assumptions, definitions, and scope before analysis begins
Identify what data is needed and where it comes from
Ask follow-up questions when requirements are ambiguous
4. Reporting & Dashboards (Power BI)
Build and maintain Power BI reports and dashboards
Use existing semantic models and measures
Create new measures using DAX (Data Analysis Expressions) with AI guidance
Ensure reports:
align with defined metrics
update reliably
are understandable to non-technical users
5. Documentation & Knowledge Transfer
Document:
Excel models
SQL queries
Power BI reports
Write explanations that allow another analyst to:
understand the logic
reproduce results
maintain the system
Use AI to accelerate documentation while ensuring accuracy
6. Validation & Quality Control
Reconcile outputs to authoritative sources
Identify anomalies and unexplained changes
Use validation checks rather than assumptions
Explain confidence levels and limitations clearly
7. Collaboration & Communication
Work with:
finance
operations
IT
management
Present findings clearly in plain language
Respond constructively to questions and challenges
Accept feedback and revise analysis as needed
Required Skills & Competencies
Analytical & Professional Skills
Curiosity and skepticism
Attention to detail
Comfort asking clarifying questions
Willingness to document work
Ability to explain complex ideas simply
Technical Skills (Baseline)
Excel (intermediate level or higher)
Basic SQL (SELECT, JOIN, GROUP BY)
Familiarity with Power BI or similar BI tools
Comfort using AI tools for coding, explanation, and documentation
Candidates are not expected to know everything on day one.
Preferred Qualifications
Degree in:
Finance
Accounting
Economics
Data Analytics
Information Systems
Engineering
Public Administration
Internship or project experience involving data analysis
Exposure to:
budgeting
forecasting
cost allocation
operational metrics
What Success Looks Like (First 12–18 Months)
A successful analyst in this role will be able to:
independently build and explain Excel models
write and validate SQL queries with AI assistance
maintain Power BI reports without breaking definitions
document their work clearly
flag issues early rather than hiding uncertainty
earn trust by being transparent and disciplined
What This Role Is Not
This role is not:
a pure programmer role
a dashboard-only role
a “press the button” reporting job
a role that values speed over accuracy
Why This Role Matters
Organizations increasingly fail not because they lack data, but because:
logic is undocumented
assumptions are hidden
systems are fragile
knowledge walks out the door
This role exists to prevent that.
Closing Note to Candidates
You do not need to be an expert in every tool.
You do need to:
think clearly,
communicate honestly,
learn continuously,
and use AI responsibly.
If you can do that, the tools will follow.
Appendix E
Interview Questions a Strong Analyst Should Ask
(And Why the Answers Matter)
This appendix is written for candidates — especially early-career analysts — who want to succeed, grow, and contribute meaningfully.
These are not technical questions. They are questions about whether the environment supports good analytical work.
A thoughtful organization will welcome these questions. An uncomfortable response is itself an answer.
1. Will I Have Timely Access to the Data I’m Expected to Analyze?
Why this matters
Analysts fail more often from lack of access than lack of ability.
If key datasets (such as utility billing, payroll, permitting, or ridership data) require long approval chains, partial access, or repeated manual requests, analysis stalls. Long delays force analysts to restart work cold, which is inefficient and demoralizing.
A healthy environment has:
clear data access rules,
predictable turnaround times,
and documented data sources.
2. Will I Be Able to Work in Focused Blocks of Time?
Why this matters
Analytical work requires concentration and continuity.
If an analyst’s day is fragmented by:
constant meetings,
urgent ad-hoc requests,
unrelated administrative tasks,
then even talented analysts struggle to make progress. Repeated interruptions over days or weeks force constant re-learning and increase error risk.
Strong teams protect at least some uninterrupted time for deep work.
3. How Often Are Priorities Changed Once Work Has Started?
Why this matters
Changing priorities is normal. Constant resets are not.
Frequent shifts without closure:
waste effort,
erode confidence,
and prevent analysts from seeing work through to completion.
A good environment allows:
exploratory work,
followed by stabilization,
followed by delivery.
Analysts grow fastest when they can complete full analytical cycles.
4. Will I Be Asked to Do Significant Work Outside the Role You’re Hiring Me For?
Why this matters
Early-career analysts often fail because they are overloaded with tasks unrelated to analysis:
ad-hoc administrative work,
manual data entry,
report formatting unrelated to insights,
acting as an informal IT support desk.
This dilutes skill development and leads to frustration.
A strong role respects analytical focus while allowing reasonable cross-functional exposure.
5. Where Will This Role Sit Organizationally?
Why this matters
Analysts thrive when they are close to:
decision-makers,
subject-matter experts,
and the business context.
Being housed in IT can be appropriate in some organizations, but analysts often succeed best when:
they are embedded in finance, operations, or planning,
with strong, cooperative support from IT, not ownership by IT.
Clear role placement reduces confusion about expectations and priorities.
6. What Kind of Support Will I Have from IT?
Why this matters
Analysts do not need IT to do their work for them — but they do need:
help with access,
guidance on standards,
and assistance when systems issues arise.
A healthy environment has:
defined IT support pathways,
mutual respect between analysts and IT,
and shared goals around data quality and security.
Adversarial or unclear relationships slow everyone down.
7. Will I Be Encouraged to Document My Work — and Given Time to Do So?
Why this matters
Documentation is often praised but rarely protected.
If analysts are rewarded only for speed and output, documentation becomes the first casualty. This creates fragile systems and makes handoffs painful.
Strong organizations:
value documentation,
allow time for it,
and recognize it as part of the job, not overhead.
8. How Will Success Be Measured in the First Year?
Why this matters
Vague success criteria create anxiety and misalignment.
A healthy answer includes:
skill development,
reliability,
learning the organization’s data,
and increasing independence over time.
Early-career analysts need space to learn without fear of being labeled “slow.”
9. What Happens When Data or Assumptions Are Unclear?
Why this matters
No dataset is perfect.
Analysts succeed when:
questions are welcomed,
assumptions are discussed openly,
and uncertainty is handled professionally.
An environment that discourages questions or punishes transparency leads to quiet errors and loss of trust.
10. Will I Be Allowed — and Encouraged — to Use Modern Tools Responsibly?
Why this matters
Analysts today learn and work using tools like:
Excel,
SQL,
Power BI,
and AI-assisted analysis.
If these tools are discouraged, restricted without explanation, or treated with suspicion, analysts are forced into inefficient workflows. In many cases, current versions with newer features deliver meaningfully better productivity. Is the organization more than a year or two behind on tool updates? How do key decision-makers view AI?
Strong organizations focus on:
governance,
validation,
and responsible use — not blanket prohibition.
11. How Are Analytical Mistakes Handled?
Why this matters
Mistakes happen — especially while learning.
The question is whether the culture responds with:
learning and correction, or
blame and fear.
Analysts grow fastest in environments where:
mistakes are surfaced early,
corrected openly,
and used to improve systems.
12. Who Will I Learn From?
Why this matters
Early-career analysts need:
examples,
feedback,
and mentorship.
Even informal guidance matters.
A thoughtful answer shows the organization understands that analysts are developed, not simply hired.
Closing Note to Candidates
These questions are not confrontational. They are professional.
Organizations that welcome them are more likely to:
retain talent,
produce reliable analysis,
and build durable systems.
If an organization cannot answer these questions clearly, it does not mean it is a bad place — but it may not yet be a good place for an analyst to thrive.
Appendix F
A Necessary Truce: IT Control, Analyst Access, and the Role of Sandboxes
One of the most common — and understandable — tensions in modern organizations sits at the boundary between IT and analytical staff.
It usually sounds like this:
“We can’t let anyone outside IT touch live databases.”
On this point, IT is absolutely right.
Production systems exist to:
run payroll,
bill customers,
issue checks,
post transactions,
and protect sensitive information.
They must be:
stable,
secure,
auditable,
and minimally disturbed.
No serious analyst disputes this.
But here is the equally important follow-up question — one that often goes unspoken:
If analysts cannot access live systems, do they have access to a safe, current analytical environment instead?
Production Is Not the Same Thing as Analysis
The core misunderstanding is not about permission. It is about purpose.
Production systems are built to execute transactions correctly.
Analytical systems are built to understand what happened.
These are different jobs, and they should live in different places.
IT departments already understand this distinction in principle. The question is whether it has been implemented in practice.
The Case for Sandboxes and Analytical Mirrors
A well-run organization does not give analysts access to live transactional tables.
Instead, it provides:
read-only mirrors
overnight refreshes at a minimum
restricted, de-identified datasets
clearly defined analytical schemas
This is not radical. It is standard practice in mature organizations.
What a Sandbox Actually Is
A sandbox is:
a copy of production data,
refreshed on a schedule (often nightly),
isolated from operational systems,
and safe to explore without risk.
Analysts can:
query freely,
build models,
validate logic,
and document findings
…without the possibility of disrupting operations.
A Practical Example: Payroll and Personnel Data
Payroll is often cited as the most sensitive system — and rightly so.
But here is the practical reality:
Most analytical work does not require:
Social Security numbers
bank account details
wage garnishments
benefit elections
direct deposit instructions
What analysts do need are things like:
position counts
departments
job classifications
pay grades
hours worked
overtime
trends over time
A Payroll / Personnel sandbox can be created that:
mirrors the real payroll tables,
strips or masks protected fields,
replaces SSNs with surrogate keys,
removes fields irrelevant to analysis,
refreshes nightly from production
This allows analysts to answer questions such as:
How is staffing changing?
Where is overtime increasing?
What are vacancy trends?
How do personnel costs vary by department or function?
All without exposing sensitive personal data.
This is not a compromise of security. It is an application of data minimization, a core security principle.
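A minimal sketch of what such an analytical mirror can look like is shown below. The schema, table, and column names are hypothetical, and in practice IT builds and refreshes the object; the point is that sensitive fields are never selected and identifiers are replaced with surrogate keys.

```sql
-- Hypothetical de-identified payroll mirror, built and refreshed nightly by IT
CREATE VIEW sandbox.vw_PayrollAnalytical AS
SELECT
    -- surrogate key; in practice use a salted hash or a persisted key map, never the raw SSN
    CONVERT(varchar(64), HASHBYTES('SHA2_256', p.SSN), 2) AS EmployeeKey,
    p.Department,
    p.JobClassification,
    p.PayGrade,
    p.PayPeriodEnd,
    p.HoursWorked,
    p.OvertimeHours
FROM prod.Payroll AS p;
-- Bank accounts, garnishments, benefit elections, and direct deposit fields are simply never selected.
```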
Why This Matters More Than IT Realizes
When analysts lack access to safe, current analytical data, several predictable failures occur:
Analysts rely on stale exports
Logic is rebuilt repeatedly from scratch
Results drift from official numbers
Trust erodes between departments
Decision-makers get inconsistent answers
Ironically, over-restriction often increases risk, because:
people copy data locally,
spreadsheets proliferate,
and controls disappear entirely.
A well-designed sandbox reduces risk by centralizing access under governance.
What IT Is Right to Insist On
IT is correct to insist on:
no write access
no direct production access
strong role-based security
auditing and logging
clear ownership of schemas
documented refresh processes
None of that is negotiable.
But those safeguards are fully compatible with analyst access — if access is provided in the right environment.
What Analysts Are Reasonably Asking For
Analysts are not asking to:
run UPDATE statements on live tables
bypass security controls
access protected personal data
manage infrastructure
They are asking for:
timely access to analytical copies of data
predictable refresh schedules
stable schemas
and the ability to do their job without constant resets
That is a governance problem, not a personnel problem.
The Ideal Operating Model
In a healthy organization:
IT owns production systems
IT builds and governs analytical mirrors
Analysts work in sandboxes
Finance and operations define meaning
Validation ties analysis back to production totals
Everyone wins
This model:
protects systems,
protects data,
supports analysis,
and builds trust.
Why This Belongs in This Series
Earlier appendices described:
the skills of the modern analyst,
the questions analysts should ask,
and the environments that cause analysts to fail or succeed.
This appendix addresses a core environmental reality:
Analysts cannot succeed without access — and access does not require risk.
The solution is not fewer analysts or tighter gates. The solution is better separation between production and analysis.
A Final Word to IT, Finance, and Leadership
This is not an argument against IT control.
It is an argument for IT leadership.
The most effective IT departments are not those that say “no” most often — they are the ones that say:
“Here is the safe way to do this.”
Sandboxes, data warehouses, and analytical mirrors are not luxuries. They are the infrastructure that allows modern organizations to think clearly without breaking what already works.
Closing Note on the Appendices
These appendices complete the framework:
The main essay explains the stack
The follow-up explains how to direct AI
These appendices make it operational
Together, they describe not just how to use AI—but how to use it responsibly, professionally, and durably.
A technical framework for staffing, facilities, and cost projection
Abstract
In local government forecasting, population is the dominant driver of service demand, staffing requirements, facility needs, and operating costs. While no municipal system can be forecast with perfect precision, population-based models—when properly structured—produce estimates that are sufficiently accurate for planning, budgeting, and capital decision-making. Crucially, population growth in cities is not a sudden or unknowable event.
Through annexation, zoning, platting, infrastructure construction, utility connections, and certificates of occupancy, population arrival is observable months or years in advance. This paper presents population not merely as a driver, but as a leading indicator, and demonstrates how cities can convert development approvals into staged population forecasts that support rational staffing, facility sizing, capital investment, and operating cost projections.
1. Introduction: Why population sits at the center
Local governments exist to provide services to people. Police protection, fire response, streets, parks, water, sanitation, administration, and regulatory oversight are all mechanisms for supporting a resident population and the activity it generates. While policy choices and service standards influence how services are delivered, the volume of demand originates with population.
Practitioners often summarize this reality informally:
“Tell me the population, and I can tell you roughly how many police officers you need. If I know the staff, I can estimate the size of the building. If I know the size, I can estimate the construction cost. If I know the size, I can estimate the electricity bill.”
This paper formalizes that intuition into a defensible forecasting framework and addresses a critical objection: population is often treated as uncertain or unknowable. In practice, population growth in cities is neither sudden nor mysterious—it is permitted into existence through public processes that unfold over years.
2. Population as a base driver, not a single-variable shortcut
Population does not explain every budget line, but it explains most recurring demand when paired with a small number of modifiers.
At its core, many municipal services follow a simple structure: Service Demand ≈ Population × Per-Capita Demand Rate, adjusted by a small set of modifiers (density, demographics, service standards).
While individual events vary, aggregate demand scales with population.
3.2 Capacity, not consumption, drives budgets
Municipal budgets fund capacity, not just usage:
Staff must be available before calls occur
Facilities must exist before staff are hired
Vehicles and equipment must be in place before service delivery
Capacity decisions are inherently population-driven.
4. Population growth is observable before it arrives
A defining feature of local government forecasting—often underappreciated—is that population growth is authorized through public approvals long before residents appear in census or utility data.
Population does not “arrive”; it progresses through a pipeline.
5. The development pipeline as a population forecasting timeline
5.1 Annexation: strategic intent (years out)
Annexation establishes:
Jurisdictional responsibility
Long-term service obligations
Future land-use authority
While annexation does not create immediate population, it signals where population will eventually be allowed.
Forecast role:
Long-range horizon marker
Infrastructure and service envelope planning
Typical lead time: 3–10 years
5.2 Zoning: maximum theoretical population
Zoning converts land into entitled density.
From zoning alone, cities can estimate:
Maximum dwelling units
Maximum population at buildout
Long-run service ceilings
Zoning defines upper bounds, even if timing is uncertain.
Forecast role:
Long-range capacity planning
Useful for master plans and utility sizing
Typical lead time: 3–7 years
5.3 Preliminary plat: credible development intent
Preliminary plat approval signals:
Developer capital commitment
Defined lot counts
Identified phasing
Population estimates become quantifiable, even if delivery timing varies.
Forecast role:
Medium-high certainty population
First stage for phased population modeling
Typical lead time: 1–3 years
5.4 Final plat: scheduled population
Final plat approval:
Legally creates lots
Locks in density and configuration
Triggers infrastructure construction
Impact fees and other costs are committed
At this point, population arrival is no longer speculative.
5.5 Infrastructure construction: physical commitment
Once streets, utilities, and drainage are built, population arrival becomes physically constrained by construction schedules.
Forecast role:
Narrow timing window
Supports staffing lead-time decisions
Typical lead time: 6–18 months
5.6 Water meter connections: imminent occupancy
Water meters are one of the most reliable near-term indicators:
Each residential meter ≈ one household
Installations closely precede vertical construction
Forecast role:
Quarterly or monthly population forecasting
Just-in-time operational scaling
Typical lead time: 1–6 months
5.7 Certificates of Occupancy: population realized
Certificates of occupancy convert permitted population into actual population.
At this point:
Service demand begins immediately
Utility consumption appears
Forecasts can be validated
Forecast role:
Confirmation and calibration
Not prediction
6. Population forecasting as a confidence ladder
| Development Stage | Population Certainty | Timing Precision | Planning Use |
|---|---|---|---|
| Annexation | Low | Very low | Strategic |
| Zoning | Low–Medium | Low | Capacity envelopes |
| Preliminary Plat | Medium | Medium | Phased planning |
| Final Plat | High | Medium–High | Budget & staffing |
| Infrastructure Built | Very High | High | Operational prep |
| Water Meters | Extremely High | Very High | Near-term ops |
| COs | Certain | Exact | Validation |
Population forecasting in cities is therefore graduated, not binary.
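To show how the ladder can be operationalized, here is a small, runnable T-SQL sketch that converts a hypothetical pipeline snapshot into staged population estimates. The stage labels, lot counts, and the 2.8 persons-per-household factor are illustrative assumptions, not data from any particular city.

```sql
-- Hypothetical pipeline snapshot converted to staged population (2.8 persons per household assumed)
SELECT Stage,
       Units,
       Units * 2.8 AS EstimatedPopulation
FROM (VALUES
        ('Final plat approved',               420),
        ('Infrastructure under construction', 260),
        ('Water meters set',                  160),
        ('Certificates of occupancy',          95)
     ) AS Pipeline (Stage, Units);
```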
7. From population to staffing
Once population arrival is staged, staffing can be forecast using service-specific ratios and fixed minimums.
7.1 Police example (illustrative ranges)
Sworn officers per 1,000 residents commonly stabilize within broad bands that depend on service level, demand, and known local ratios:
Lower demand: ~1.2–1.8
Moderate demand: ~1.8–2.4
High demand: ~2.4–3.5+
Civilian support staff often scale as a fraction of sworn staffing.
The appropriate structure is: Officers = α_police + β_police · Population
where α accounts for minimum 24/7 coverage and supervision, and β captures the incremental, population-driven component.
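A worked example of this fixed-plus-variable structure, in runnable T-SQL with invented parameter values (the α and β below are placeholders, not recommended ratios):

```sql
-- Fixed-plus-variable staffing math for a hypothetical city of 45,000 residents
DECLARE @Population float = 45000;
DECLARE @Alpha float = 12;       -- fixed minimum: 24/7 coverage and supervision
DECLARE @Beta  float = 0.0018;   -- incremental officers per resident (about 1.8 per 1,000)
SELECT @Alpha + @Beta * @Population AS ProjectedSwornOfficers;   -- returns 93
```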
7.2 General government staffing
Administrative staffing scales with:
Population
Number of employees
Asset inventory
Transaction volume
A fixed core plus incremental per-capita growth captures this reality more accurately than pure ratios.
8. From staffing to facilities
Facilities are a function of:
Headcount
Service configuration
Security and public access needs
A practical planning method: Facility Size = FTE · Gross SF per FTE
Typical blended civic office planning ranges usually fall within:
~175–300 gross SF per employee
Specialized spaces (dispatch, evidence, fleet, courts) are layered on separately.
9. From facilities to capital and operating costs
9.1 Capital costs
Capital expansion costs are typically modeled as: Capex = Added SF · Cost per SF · (1 + Soft Costs)
Where soft costs include design, permitting, contingencies, and escalation.
9.2 Operating costs
Facility operating costs scale predictably with size:
Electricity: kWh per SF per year
Maintenance: % of replacement value or $/SF
Custodial: $/SF
Lifecycle renewals
Electricity alone can be reasonably estimated as: Annual Cost = SF · kWh/SF · $/kWh
This is rarely exact—but it is directionally reliable.
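Carrying the chain from headcount through square footage to capital and electricity cost, the runnable T-SQL below uses invented planning factors; the SF per FTE, cost per SF, soft cost share, kWh per SF, and electric rate are placeholders to be replaced with local values.

```sql
-- Staffing -> square footage -> capital cost -> annual electricity (all factors hypothetical)
DECLARE @FTE         float = 120;
DECLARE @SFperFTE    float = 250;    -- blended gross SF per employee
DECLARE @CostPerSF   float = 450;    -- hard construction cost per SF
DECLARE @SoftCostPct float = 0.30;   -- design, permitting, contingency, escalation
DECLARE @kWhPerSF    float = 15;     -- annual electricity intensity
DECLARE @RatePerkWh  float = 0.11;   -- dollars per kWh
SELECT
    @FTE * @SFperFTE                                     AS GrossSF,           -- 30,000 SF
    @FTE * @SFperFTE * @CostPerSF * (1 + @SoftCostPct)   AS EstimatedCapex,    -- about $17.55M
    @FTE * @SFperFTE * @kWhPerSF * @RatePerkWh           AS AnnualElectricity; -- about $49,500
```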
10. Key modifiers that refine population models
Population alone is powerful but incomplete. High-quality forecasts adjust for:
Density and land use
Daytime population and employment
Demographics
Service standards
Productivity and technology
Geographic scale (lane miles, acres)
These modifiers refine, but do not replace, population as the base driver.
11. Why growth surprises cities anyway
When cities claim growth was “unexpected,” the issue is rarely lack of information. More often:
Development signals were not integrated into finance models
Staffing and capital planning lagged approvals
Fixed minimums were ignored
Threshold effects (new stations, expansions) were deferred too long
Growth that appears sudden is usually forecastable growth that was not operationalized.
12. Conclusion
Population is the primary driver of local government demand, but more importantly, it is a predictable driver. Through annexation, zoning, platting, infrastructure construction, utility connections, and certificates of occupancy, cities possess a multi-year advance view of population arrival.
This makes it possible to:
Phase staffing rationally
Time facilities before overload
Align capital investment with demand
Improve credibility with councils, auditors, and rating agencies
In local government, population growth is not a surprise. It is a permitted, engineered, and scheduled outcome of public decisions. A forecasting system that treats population as both a driver and a leading indicator is not speculative—it is simply paying attention to the city’s own approvals.
Appendix A
Defensibility of Population-Driven Forecasting Models
A response framework for auditors, rating agencies, and governing bodies
Purpose of this appendix
This appendix addresses a common concern raised during budget reviews, audits, bond disclosures, and council deliberations:
“Population-based forecasts seem too simplistic or speculative.”
The purpose here is not to argue that population is the only factor affecting local government costs, but to demonstrate that population-driven forecasting—when anchored to development approvals and adjusted for service standards—is methodologically sound, observable, and conservative.
A.1 Population forecasting is not speculative in local government
A frequent misconception is that population forecasts rely on demographic projections or external estimates. In practice, this model relies primarily on the city’s own legally binding approvals.
Population growth enters the forecast only after it has passed through:
Annexation agreements
Zoning entitlements
Preliminary and final plats
Infrastructure construction
Utility connections
Certificates of occupancy
These are public, documented actions, not assumptions.
Key distinction for reviewers: This model does not ask “How fast might the city grow?” It asks “What growth has the city already approved, and when will it become occupied?”
A.2 Population is treated as a leading indicator, not a lagging one
Traditional population measures (census counts, ACS estimates) are lagging indicators. This model explicitly avoids relying on those for near-term forecasting.
Instead, it uses development milestones as leading indicators, each with increasing certainty and narrower timing windows.
For audit and disclosure purposes:
Early-stage entitlements affect only long-range capacity planning
Staffing and capital decisions are triggered only at later, high-certainty stages
Near-term operating impacts are tied to utility connections and COs
This layered approach prevents premature spending while avoiding reactive under-staffing.
A.3 Fixed minimums prevent over-projection in small or slow-growth cities
A common audit concern is that per-capita models overstate staffing needs.
This model explicitly separates:
Fixed baseline capacity (α)
Incremental population-driven capacity (β)
This structure:
Prevents unrealistic staffing increases in early growth stages
Operating costs scale predictably with assets and space.
The model is transparent, testable, and adjustable.
Therefore: A population-driven forecasting model of this type represents a prudent, defensible, and professionally reasonable approach to long-range municipal planning.
Appendix B
Consequences of Failing to Anticipate Population Growth
A diagnostic review of reactive municipal planning
Purpose of this appendix
This appendix describes common failure patterns observed in cities that do not systematically link development approvals to population, staffing, and facility planning. These outcomes are not the result of negligence or bad intent; they typically arise from fragmented information, short planning horizons, or the absence of an integrated forecasting framework.
The patterns described below are widely recognized in municipal practice and are offered to illustrate the practical risks of reactive planning.
B.1 “Surprise growth” that was not actually a surprise
A frequent narrative in reactive cities is that growth “arrived suddenly.” In most cases, the growth was visible years earlier through zoning approvals, plats, or utility extensions but was not translated into staffing or capital plans.
Common indicators:
Approved subdivisions not reflected in operating forecasts
Development tracked only by planning staff, not finance or operations
Population discussed only after occupancy
Consequences:
Budget shocks
Emergency staffing requests
Loss of credibility with governing bodies
B.2 Knee-jerk staffing reactions
When growth impacts become unavoidable, reactive cities often respond through hurried staffing actions.
Typical symptoms:
Mid-year supplemental staffing requests
Heavy reliance on overtime
Accelerated hiring without workforce planning
Training pipelines overwhelmed
Consequences:
Elevated labor costs
Increased burnout and turnover
Declining service quality during growth periods
Inefficient long-term staffing structures
B.3 Under-sizing followed by over-correction
Without forward planning, cities often alternate between two extremes:
Under-sizing due to conservative or delayed response
Over-sizing in reaction to service breakdowns
Examples:
Facilities built too small “to be safe”
Rapid expansions shortly after completion
Swing from staffing shortages to excess capacity
Consequences:
Higher lifecycle costs
Poor space utilization
Perception of waste or mismanagement
B.4 Obsolete facilities at the moment of completion
Facilities planned without reference to future population often open already constrained.
Common causes:
Planning based on current headcount only
Ignoring entitled but unoccupied development
Failure to include expansion capability
Consequences:
Expensive retrofits
Disrupted operations during expansion
Shortened facility useful life
This is one of the most costly errors because capital investments are long-lived and difficult to correct.
B.5 Deferred capital followed by crisis-driven spending
Reactive cities often delay capital investment until systems fail visibly.
Typical patterns:
Fire stations added only after response times degrade
Police facilities expanded only after overcrowding
Utilities upgraded only after service complaints
Consequences:
Emergency procurement
Higher construction costs
Increased debt stress
Lost opportunity for phased financing
B.6 Misalignment between departments
When population intelligence is not shared across departments:
Planning knows what is coming
Finance budgets based on current year
Operations discover impacts last
Consequences:
Conflicting narratives to council
Fragmented decision-making
Reduced trust between departments
Population-driven forecasting provides a common factual baseline.
B.7 Overreliance on lagging indicators
Reactive cities often rely heavily on:
Census updates
Utility consumption after occupancy
Service call increases
These indicators confirm growth after it has already strained capacity.
Consequences:
Persistent lag between demand and response
Structural understaffing
Continual “catch-up” budgeting
B.8 Political whiplash and credibility erosion
Unanticipated growth pressures often force councils into repeated difficult votes:
Emergency funding requests
Mid-year budget amendments
Rapid debt authorizations
Over time, this leads to:
Voter skepticism
Council fatigue
Reduced tolerance for legitimate future investments
Planning failures become governance failures.
B.9 Inefficient use of taxpayer dollars
Ironically, reactive planning often costs more, not less.
Cost drivers include:
Overtime premiums
Compressed construction schedules
Retrofit and rework costs
Higher borrowing costs due to rushed timing
Proactive planning spreads costs over time and reduces risk premiums.
B.10 Organizational stress and morale impacts
Staff experience growth pressures first.
Observed impacts:
Chronic overtime
Inadequate workspace
Equipment shortages
Frustration with leadership responsiveness
Over time, this contributes to:
Higher turnover
Loss of institutional knowledge
Reduced service consistency
B.11 Why these failures persist
These patterns are not caused by incompetence. They persist because:
Growth information is siloed
Forecasting is viewed as speculative
Political incentives favor short-term restraint
Capital planning horizons are too short
Absent a formal framework, cities default to reaction.
B.12 Summary for governing bodies
Cities that do not integrate development approvals into population-driven forecasting commonly experience:
Perceived “surprise” growth
Emergency staffing responses
Repeated under- and over-sizing
Facilities that age prematurely
Higher long-term costs
Organizational strain
Reduced public confidence
None of these outcomes are inevitable. They are symptoms of not using information the city already has.
B.13 Closing observation
The contrast between proactive and reactive cities is not one of optimism versus pessimism. It is a difference between:
Anticipation versus reaction
Sequencing versus scrambling
Planning versus explaining after the fact
Population-driven forecasting does not eliminate uncertainty. It replaces surprise with preparation.
Appendix C
Population Readiness & Forecasting Discipline Checklist
A self-assessment for proactive versus reactive cities
Purpose: This checklist allows a city to evaluate whether it is systematically anticipating population growth—or discovering it after impacts occur. It is designed for use by city management teams, finance directors, auditors, and governing bodies.
How to use: For each item, mark:
✅ Yes / In place
⚠️ Partially / Informal
❌ No / Not done
Patterns matter more than individual answers.
Section 1 — Visibility of Future Population
C-1 Do we maintain a consolidated list of annexed, zoned, and entitled land with estimated buildout population?
C-2 Are preliminary and final plats tracked in a format usable by finance and operations (not just planning)?
C-3 Do we estimate population by development phase, not just at full buildout?
C-4 Is there a documented method for converting lots or units into population (household size assumptions reviewed periodically)?
C-5 Do we distinguish between long-range potential growth and near-term probable growth?
Red flag: Population is discussed primarily in narrative terms (“fast growth,” “slowing growth”) rather than quantified and staged.
Section 2 — Timing and Lead Indicators
C-6 Do we identify which development milestone triggers planning action (e.g., preliminary plat vs final plat)?
C-7 Are infrastructure completion schedules incorporated into population timing assumptions?
C-8 Are water meter installations or equivalent utility connections tracked and forecasted?
C-9 Do we use certificates of occupancy to validate and recalibrate population forecasts annually?
C-10 Is population forecasting treated as a rolling forecast, not a once-per-year estimate?
Red flag: Population is updated only when census or ACS data is released.
Section 3 — Staffing Linkage
C-11 Does each major department have an identified population or workload driver?
C-12 Are fixed minimum staffing levels explicitly separated from growth-driven staffing?
C-13 Are staffing increases tied to forecasted population arrival, not service breakdowns?
C-14 Do hiring plans account for lead times (recruitment, academies, training)?
C-15 Can we explain recent staffing increases as either:
population growth, or
explicit policy/service-level changes?
Red flag: Staffing requests frequently cite “we are behind” without reference to forecasted growth.
Section 4 — Facilities and Capital Planning
C-16 Are facility size requirements derived from staffing projections, not current headcount?
C-17 Do capital plans include expansion thresholds (e.g., headcount or service load triggers)?
C-18 Are new facilities designed with future expansion capability?
C-19 Are entitled-but-unoccupied developments considered when evaluating future facility adequacy?
C-20 Do we avoid building facilities that are at or near capacity on opening day?
Red flag: Facilities require major expansion within a few years of completion.
Section 5 — Operating Cost Awareness
C-21 Are operating costs (utilities, maintenance, custodial) modeled as a function of facility size and assets?
C-22 Are utility cost impacts of expansion estimated before facilities are approved?
C-23 Do we understand how population growth affects indirect departments (HR, IT, finance)?
C-24 Are lifecycle replacement costs considered when adding capacity?
Red flag: Operating cost increases appear as “unavoidable surprises” after facilities open.
Section 6 — Cross-Department Integration
C-25 Do planning, finance, and operations use the same population assumptions?
C-26 Is growth discussed in joint meetings, not only within planning?
C-27 Does finance receive regular updates on development pipeline status?
C-28 Are growth assumptions documented and shared, not implicit or informal?
Red flag: Different departments give different growth narratives to council.
Section 7 — Governance and Transparency
C-29 Can we clearly explain to council why staffing or capital is needed before service failure occurs?
C-30 Are population-driven assumptions documented in budget books or CIP narratives?
C-31 Do we distinguish between:
growth-driven needs, and
discretionary service enhancements?
C-32 Can auditors or rating agencies trace growth-related decisions back to documented approvals?
Red flag: Growth explanations rely on urgency rather than evidence.
Section 8 — Validation and Learning
C-33 Do we compare forecasted population arrival to actual COs annually?
C-34 Are forecasting errors analyzed and corrected rather than ignored?
C-35 Do we adjust household size, absorption rates, or timing assumptions over time?
Red flag: Forecasts remain unchanged year after year despite clear deviations.
Scoring Interpretation (Optional)
Mostly ✅ → Proactive, anticipatory city
Mix of ✅ and ⚠️ → Partially planned, risk of reactive behavior
Many ❌ → Reactive city; growth will feel like a surprise
A city does not need perfect scores. The presence of structure, documentation, and sequencing is what matters.
Closing Note for Leadership
If a city can answer most of these questions affirmatively, it is not guessing about growth—it is managing it. If many answers are negative, the city is likely reacting to outcomes it had the power to anticipate.
Population growth does not cause planning problems. Ignoring known growth signals does.
Appendix D
Population-Driven Planning Maturity Model
A framework for assessing and improving municipal forecasting discipline
Purpose of this appendix
This maturity model describes how cities evolve in their ability to anticipate population growth and translate it into staffing, facility, and financial planning. It recognizes that most cities are not “good” or “bad” planners; they are simply at different stages of organizational maturity.
Each level builds logically on the prior one. Advancement does not require perfection—only structure, integration, and discipline.
Level 1 — Reactive City
“We didn’t see this coming.”
Characteristics
Population discussed only after impacts are felt
Reliance on census or anecdotal indicators
Growth described qualitatively (“exploding,” “slowing”)
Staffing added only after service failure
Capital projects triggered by visible overcrowding
Frequent mid-year budget amendments
Typical behaviors
Emergency staffing requests
Heavy overtime usage
Facilities opened already constrained
Surprise operating cost increases
Organizational mindset
Growth is treated as external and unpredictable.
Risks
Highest long-term cost
Lowest credibility with councils and rating agencies
Chronic organizational stress
Level 2 — Aware but Unintegrated City
“Planning knows growth is coming, but others don’t act on it.”
Characteristics
Development pipeline tracked by planning
Finance and operations not fully engaged
Growth acknowledged but not quantified in budgets
Capital planning still reactive
Limited documentation of assumptions
Typical behaviors
Late staffing responses despite known development
Facilities planned using current headcount
Disconnect between planning reports and budget narratives
Organizational mindset
Growth is known, but not operationalized.
Risks
Continued surprises
Internal frustration
Mixed messages to council
Level 3 — Structured Forecasting City
“We model growth, but execution lags.”
Characteristics
Population forecasts tied to development approvals
Preliminary staffing models exist
Fixed minimums recognized
Capital needs identified in advance
Forecasts updated annually
Typical behaviors
Better budget explanations
Improved CIP alignment
Still some late responses due to execution gaps
Organizational mindset
Growth is forecastable, but timing discipline is still developing.
Strengths
Credible analysis
Reduced emergencies
Clearer governance conversations
Level 4 — Integrated Planning City
“Approvals, staffing, and capital move together.”
Characteristics
Development pipeline drives population timing
Staffing plans phased to population arrival
Facility sizing based on projected headcount
Operating costs modeled from assets
Cross-department coordination is routine
Typical behaviors
Hiring planned ahead of demand
Facilities open with expansion capacity
Capital timed to avoid crisis spending
Clear audit trail from approvals to costs
Organizational mindset
Growth is managed, not reacted to.
Benefits
Stable service delivery during growth
Higher workforce morale
Strong credibility with governing bodies
Level 5 — Adaptive, Data-Driven City
“We learn, recalibrate, and optimize continuously.”
Characteristics
Rolling population forecasts
Development milestones tracked in near-real time
Annual validation against COs and utility data
Forecast errors analyzed and corrected
Scenario modeling for alternative growth paths
Typical behaviors
Minimal surprises
High confidence in long-range plans
Early identification of inflection points
Proactive communication with councils and investors
Organizational mindset
Growth is a controllable system, not a threat.
Benefits
Lowest lifecycle cost
Highest service reliability
Institutional resilience
Summary Table
| Level | Description | Core Risk |
|---|---|---|
| 1 | Reactive | Crisis-driven decisions |
| 2 | Aware, unintegrated | Late responses |
| 3 | Structured | Execution lag |
| 4 | Integrated | Few surprises |
| 5 | Adaptive | Minimal risk |
Key Insight
Most cities are not failing—they are stuck between Levels 2 and 3. The largest gains come not from sophisticated analytics, but from integration and timing discipline.
Progression does not require:
Perfect forecasts
Advanced software
Large consulting engagements
It requires:
Using approvals the city already grants
Sharing population assumptions across departments
Sequencing decisions intentionally
Closing Observation
Cities do not choose whether they grow. They choose whether growth feels like a surprise or a scheduled event.
A collaboration between Lewis McLain & AI (Suggested by Becky Brooks)
Here is a funny, light-hearted, non-offensive survey designed as if a city or organization created it, full of the same bureaucratic absurdity but tailored for someone who’s just spent a couple of weeks in jail.
It is intentionally ridiculous — the kind of tone-deaf survey a city might send, trying to measure the “customer experience.”
⸻
POST-INCARCERATION CUSTOMER SATISFACTION SURVEY
Because your feedback helps us improve the parts of the experience we had no intention of improving.
Thank you for recently spending 10–45 days with us!
Your stay matters to us, and we’d love your thoughts.
Please take 3–90 minutes to complete this survey.
⸻
SECTION 1 — OVERALL EXPERIENCE
1. How satisfied were you with your recent incarceration?
• ☐ Very Satisfied
• ☐ Satisfied
• ☐ Neutral (emotionally or spiritually)
• ☐ Dissatisfied
• ☐ Very Dissatisfied
• ☐ I would like to speak to the manager of jail, please
2. Would you recommend our facility to friends or family?
• ☐ Yes, absolutely
• ☐ Only if they deserve it
• ☐ No, but I might recommend it to my ex
3. Did your stay meet your expectations?
• ☐ It exceeded them, shockingly
• ☐ It met them, sadly
• ☐ What expectations?
• ☐ I didn’t expect any of this
⸻
SECTION 2 — ACCOMMODATIONS
4. How would you rate the comfort of your sleeping arrangements?
• ☐ Five stars (would book again on Expedia)
• ☐ Three stars (I’ve slept on worse couches)
• ☐ One star (my back may sue you)
• ☐ Zero stars (please never ask this again)
5. How would you describe room service?
• ☐ Prompt and professional
• ☐ Present
• ☐ Sporadic
• ☐ I was unaware room service was an option
• ☐ Wait… was that what breakfast was supposed to be?
⸻
SECTION 3 — DINING EXPERIENCE
6. Rate the culinary artistry of our meals:
• ☐ Michelin-worthy
• ☐ Edible with effort
• ☐ Mysterious but survivable
• ☐ I have questions that science cannot answer
7. Did you enjoy the variety of menu options?
• ☐ Yes
• ☐ No
• ☐ I’m still not sure if Tuesday’s entrée was food
⸻
SECTION 4 — PROGRAMMING & ACTIVITIES
8. Which of the following activities did you participate in?
• ☐ Walking in circles
• ☐ Sitting
• ☐ Thinking about life
• ☐ Thinking about lunch
• ☐ Wondering why time moves slower in here
• ☐ Other (please describe your spiritual journey): ___________
9. Did your stay include any unexpected opportunities for personal growth?
• ☐ Learned patience
• ☐ Learned humility
• ☐ Learned the legal system very quickly
• ☐ Learned I never want to fill out this survey again
⸻
SECTION 5 — CUSTOMER SERVICE
10. How would you rate the friendliness of staff?
• ☐ Surprisingly pleasant
• ☐ Professionally indifferent
• ☐ “Move over there” was said with warmth
• ☐ I think they liked me
• ☐ I think they didn’t
11. Did staff answer your questions in a timely manner?
• ☐ Yes
• ☐ No
• ☐ I’m still waiting
• ☐ I learned not to ask questions
⸻
SECTION 6 — RELEASE PROCESS
12. How smooth was your release experience?
• ☐ Smooth
• ☐ Mostly smooth
• ☐ Bumpy
• ☐ Like trying to exit a maze blindfolded
13. Upon release, did you feel ready to re-enter society?
• ☐ Yes, I am reborn
• ☐ Somewhat
• ☐ Not at all
• ☐ Please define “ready”
⸻
SECTION 7 — FINAL COMMENTS
14. If you could change one thing about your stay, what would it be?
(Please choose only one):
• ☐ The walls
• ☐ The food
• ☐ The schedule
• ☐ The length of stay
• ☐ All of the above
• ☐ I decline to answer on advice of counsel
15. Additional feedback for management:
____________________________________________
____________________________________________
(Comments will be carefully reviewed by someone someday.)
⸻
Thank You!
Your answers will be used to improve future guest experiences.*
A collaboration between Lewis McLain & AI

A long answer to a short question from Tuesday Morning Men’s Bible Study
“Granddad… my faith is slipping.”
“Granddad, can I tell you something and you won’t think less of me? I feel like my faith in God is slipping away. I’ve prayed—truly prayed—for our family to heal, for hearts to soften, for conversations about the Lord to open again. These aren’t selfish prayers. They’re for relationships to be mended, for love to return, for estrangements to disappear.
But nothing changes. Some hearts grow colder. And any mention of God shuts everything down.
Why doesn’t God answer these good prayers? Why is He silent when the need is so great? I don’t want to lose my faith, Granddad… but I don’t know how much more silence or tension I can take.”
THE GRANDFATHER’S ANSWER: A Loving Reassurance About the Awakening—The Kairos Moment God Has Appointed
Come here, child. Sit beside me. I want to tell you something about God’s timing, something Scripture calls kairos—the appointed moment, the perfectly chosen hour when God reaches the heart in a way no human effort ever could.
Before any other story, let’s start with the one Jesus Himself told.
THE PRODIGAL SON: THE PATTERN OF ALL AWAKENINGS
(Luke 15:11–24)
A young man demands his inheritance, leaves home, and wastes everything in reckless living (vv. 12–13). When famine comes, he takes the lowest job imaginable—feeding pigs—and even longs to eat their food (vv. 14–16).
Then comes the sentence that describes every true spiritual awakening:
“But when he came to himself…” (Luke 15:17)
That is the kairos moment.
What exactly happened in that moment?
Reality shattered illusion. He saw his condition honestly for the first time.
Memory returned. He remembered his father’s goodness.
Identity stirred. He realized, “This is not who I am.”
Hope flickered. “My father’s servants have bread enough…”
The will turned. “I will arise and go to my father.” (v. 18)
Notice something important:
No one persuaded him.
No sermon reached him.
No family member argued with him.
No timeline pressured him.
His awakening came when the Father’s timing made his heart ready.
The father in the story doesn’t chase him into the far country. He waits. He watches. He trusts the process of grace.
And “while he was still a long way off,” the father sees him and runs (v. 20).
Why this matters for your prayers:
You’re praying for the very thing Jesus describes here. But the awakening of a heart—any heart—comes as God’s gift, in God’s hour, through God’s patient love.
The Prodigal Son shows us: God can change a life in a single moment. But He decides when that moment arrives.
This is the foundation. Now let me walk you through the other stories that prove this pattern again and again.
1. Jacob at Peniel — The Wrestling That Revealed His True Self
(Genesis 32:22–32)
Jacob spent years relying on himself. But his heart did not change—not through blessings, not through hardship, not through distance.
Only when God wrestled him in the night and touched his hip (v. 25) did Jacob awaken.
This was his kairos:
When his strength failed, his faith was born.
He limped away, but walked new—with a new name, a new identity, and a new dependence on God.
2. Nebuchadnezzar — One Glance That Restored His Sanity
(Daniel 4:28–37)
After years of pride, exile, and madness, his turning point wasn’t long or gradual. It happened in one second:
“I lifted my eyes to heaven, and my sanity was restored.” (Dan. 4:34)
The moment he looked up was the moment God broke through.
Kairos is when God uses a single upward glance to undo years of blindness.
3. Jonah — The Awakening in the Deep
(Jonah 2)
Jonah ran from God’s call until he reached the bottom of the sea. Only there, trapped in the fish, does Scripture say:
“When my life was fainting away, I remembered the LORD.” (Jonah 2:7)
That remembering? That was kairos.
When every escape ended, God opened his eyes.
4. David — Truth Striking in One Sentence
(2 Samuel 12; Psalm 51)
Nathan’s story awakened what months of hidden sin could not. When Nathan said, “You are the man” (2 Sam. 12:7), David’s heart broke open.
He went from blindness to confession instantly:
“I have sinned against the LORD.” (v. 13)
Psalm 51 pours out the repentance birthed in that moment.
Kairos often comes through truth spoken at the one moment God knows the heart can receive it.
5. Peter — The Rooster’s Cry and Jesus’ Look
(Luke 22:54–62)
After Peter’s third denial, Scripture says:
“The Lord turned and looked at Peter.” (v. 61)
That look shattered Peter’s fear and self-deception.
He went out and wept bitterly—not because he was condemned, but because he was awakened.
Kairos can be a look, a memory, a sound—something only God can time.
6. Saul — A Heart Reversed on the Damascus Road
(Acts 9:1–19)
Saul was not softening. He was escalating.
But Jesus met him at the crossroads and asked:
“Why are you persecuting Me?” (v. 4)
That question was a divine appointment—the moment Saul’s life reversed direction forever.
Kairos is when Jesus interrupts a story we thought was going one way and writes a new one.
7. What All These Stories Teach About Kairos Moments
Across all Scripture, kairos moments share the same attributes:
1. They are God-timed.
We cannot rush them. (Ecclesiastes 3:11)
2. They are God-initiated.
Awakenings are born of revelation, not persuasion. (John 6:44)
3. They break through illusion and restore reality.
“Coming to himself” means the heart finally sees truth. (Luke 15:17)
4. They lead to movement toward God.
Every awakening ends with a step homeward.
Your prayers are not being ignored. They are being gathered into the moment God is preparing.
8. Why This Matters for Your Family
You are praying for softened hearts, restored relationships, spiritual awakening. Those are kairos prayers, not chronos prayers.
Chronos is slow. Kairos is sudden.
Chronos waits. Kairos transforms.
You can’t see it yet, but God is preparing:
circumstances
conversations
memories
encounters
turning points
just as the father of the prodigal knew that hunger, hardship, and reflection would eventually lead his son home.
The father didn’t lose hope. He didn’t chase the son into the far country. He trusted that God’s timing would bring his child to the awakening moment.
You must do the same.
9. Take Courage, Sweetheart: The God Who Awakened Prodigals Will Awaken Hearts Again
The Prodigal Son’s turning point didn’t look like a miracle. It looked like ordinary hunger.
David’s looked like a story. Peter’s looked like a rooster. Saul’s looked like a question. Nebuchadnezzar’s looked like a glance. Jonah’s looked like despair. Jacob’s looked like a limp.
Kairos moments rarely look divine at first. But they are.
And when God moves, hearts—no matter how hard—can turn in a single breath.
Don’t lose faith, child. The silence is not God’s absence. It is God’s preparation.
And when your family’s kairos moment comes, you will say what the father in Jesus’ story said:
“This my child was dead, and is alive again; was lost, and is found.” (Luke 15:24)
Until then, hold on. Your prayers are planting seeds that God will awaken in His perfect time.
For more than fifty years, Texas has been at the center of American redistricting law. Few states have produced as many major Supreme Court decisions shaping the meaning of the Voting Rights Act, the boundaries of racial gerrymandering doctrine, and—perhaps most significantly—the Court’s modern unwillingness to police partisan gerrymandering.
Two cases define the modern era for Texas: LULAC v. Perry (2006) and Abbott v. Perez (2018). Together, they reveal how the Court analyzes racial vote dilution, when partisan motives are permissible, how intent is inferred or rejected, and what evidentiary burdens challengers must meet.
At the heart of the Court’s reasoning is a recurring tension:
the Constitution forbids racial discrimination in redistricting,
the Voting Rights Act prohibits plans that diminish minority voting strength,
but the Court has repeatedly held that partisan advantage, even aggressive partisan advantage, is not generally unconstitutional.
Texas’s maps have allowed the Court to articulate, refine, and—many argue—narrow these doctrines.
I. LULAC v. Perry (2006): Partisan Motives Allowed, But Minority Vote Dilution Not
Background
In 2003, after winning unified control of state government, Texas Republicans enacted a mid-decade congressional redistricting plan replacing the court-drawn map used in 2002. It was an openly partisan effort to convert a congressional delegation that had favored Democrats into a Republican-leaning one.
Challengers argued:
The mid-decade redistricting itself was unconstitutional.
The legislature’s partisan intent violated the Equal Protection Clause.
The plan diluted Latino voting strength in violation of Section 2 of the Voting Rights Act, particularly in old District 23.
Several districts were racial gerrymanders in which race, not merely politics, predominated in drawing the lines.
Arguments Before the Court
Challengers:
Texas had engaged in unprecedented partisan manipulation lacking a legitimate state purpose.
The dismantling of Latino opportunity districts—especially District 23—reduced the community’s ability to elect its preferred candidate.
Race was used as a tool to achieve partisan ends, in violation of the Shaw v. Reno line of racial gerrymandering cases.
Texas:
Nothing in the Constitution forbids mid-decade redistricting.
Political gerrymandering claims, even against aggressive and obvious line-drawing, were effectively beyond judicial remedy under Davis v. Bandemer (1986) and Vieth v. Jubelirer (2004).
Latino voters in District 23 were not “cohesive” enough to qualify for Section 2 protection.
District configurations reflected permissible political considerations.
The Court’s Decision
The Court’s decision was fractured, with multiple separate opinions, but several clear conclusions emerged.
1. Mid-Decade Redistricting Is Constitutional
The Court held that states are not restricted to once-a-decade redistricting. Nothing in the Constitution or federal statute bars legislatures from replacing a map mid-cycle. This effectively legitimized Texas’s overtly partisan decision to redraw the map simply because political control had shifted.
2. No Judicially Manageable Standard for Partisan Gerrymandering
The Court again declined to articulate a manageable standard for judging partisan gerrymandering. Justice Kennedy, writing for the controlling plurality, expressed concern about severe partisan abuses but concluded that no judicially administrable rule existed.
Key takeaway: Texas’s partisan motivation, even if blatant, was not itself unconstitutional.
3. Section 2 Violation in District 23: Latino Voting Strength Was Illegally Diluted
This was the major substantive ruling.
The Court found that Texas dismantled an existing Latino opportunity district (CD-23) precisely because Latino voters were on the verge of electing their preferred candidate. The legislature:
removed tens of thousands of cohesive Latino voters from the district,
replaced them with low-turnout Latino populations less likely to vote against the incumbent,
and justified the move under the guise of creating a new Latino-majority district elsewhere.
This manipulation, the Court held, denied Latino voters an equal opportunity to elect their candidate of choice, violating Section 2.
4. Racial Gerrymandering Claims Mostly Fail
The Court rejected most Shaw-type racial gerrymandering claims because plaintiffs failed to prove that race, rather than politics, predominated. This reflects a theme that becomes even stronger in later cases: when race and politics correlate—as they often do in Texas—challengers must provide powerful evidence that race, not party, drove the lines.
II. Abbott v. Perez (2018): A High Bar for Proving Discriminatory Intent
Background
After the 2010 census, Texas enacted new maps in 2011. A federal district court blocked their use and drew interim maps for the 2012 elections, later finding that several of the 2011 districts were intentionally discriminatory. In 2013, Texas then enacted maps that were largely identical to the court’s own interim maps.
Challengers argued that:
The original 2011 maps were passed with discriminatory intent.
The 2013 maps, though based on the court’s design, continued to embody the taint of 2011.
Multiple districts across Texas diluted minority voting strength or were racial gerrymanders.
Texas argued that:
The 2013 maps were valid because they were largely adopted from a court-approved version.
Any discriminatory intent from 2011 could not be imputed to the 2013 legislature.
Plaintiffs bore the burden of proving intentional discrimination district by district.
The Court’s Decision
In a 5–4 ruling, the Supreme Court reversed almost all findings of discriminatory intent against Texas.
1. Burden of Proof Is on Challengers, Not the State
The Court rejected the lower court’s presumption that Texas acted with discriminatory intent in 2013 merely because the 2011 legislature had been found to do so.
Key Holding: A finding of discriminatory intent in a prior map does not shift the burden; challengers must prove new intent for each new plan.
This significantly tightened the evidentiary bar.
2. Presumption of Legislative Good Faith
Justice Alito, writing for the majority, emphasized a longstanding principle:
Legislatures are entitled to a presumption of good faith unless challengers provide direct and persuasive evidence otherwise.
This presumption made it much harder to prove racial discrimination unless emails, testimony, or map-drawing files showed explicit racial motives.
3. Most Section 2 Vote Dilution Claims Failed
Challengers failed to show that minority voters were both cohesive and systematically defeated by white bloc voting in many districts. The Court stressed the need for:
clear demographic evidence,
consistent voting patterns,
and demonstration of feasible alternative districts.
4. Only One District Violated the Constitution
The Court affirmed discrimination in Texas House District 90, where the legislature had intentionally moved Latino voters to achieve a specific racial composition.
But the Court rejected violations in every other challenged district.
5. Practical Effect: Courts Must Defer Unless Evidence Is Unusually Strong
Abbott v. Perez is widely viewed as one of the strongest modern statements of judicial deference to legislatures in redistricting—even when past discrimination has been found.
Justice Sotomayor’s dissent called the majority opinion “astonishing in its blindness.”
III. What These Cases Together Mean: Why the Court Upheld Texas’s Maps
Across both LULAC (2006) and Abbott (2018), a coherent theme emerges in the Supreme Court’s reasoning:
1. Partisan Gerrymandering Is Not the Court’s Job to Police
Unless partisan advantage clearly crosses into racial targeting, the Court will not strike it down. Texas repeatedly argued political motives, and the Court repeatedly accepted them as legitimate.
2. Racial Discrimination Must Be Proven With Specific, District-Level Evidence
Plaintiffs must demonstrate that race—not politics—predominated.
Correlation between race and partisanship is not enough.
Evidence must address each district individually.
3. Legislatures Receive a Strong Presumption of Good Faith
Abbott v. Perez reaffirmed that courts should not infer intent from a prior legislature’s misconduct; each new plan must be judged on its own record.
4. Violations Are Found Only Where the Evidence Is Unmistakable
LULAC (2006) found a violation only because evidence clearly showed cohesive Latino voters whose electoral progress was intentionally undermined.
5. Courts Avoid Intruding into “Political Questions”
The Court has repeatedly signaled reluctance to take over the political process. This culminated in Rucho v. Common Cause (2019), where the Court held partisan gerrymandering claims categorically non-justiciable—a rule entirely consistent with how Texas cases were decided.
Conclusion: Why Texas Keeps Winning
Texas’s redistricting cases illustrate how the Supreme Court draws a sharp—and highly consequential—line:
Racial discrimination is unconstitutional, but must be proven with very specific evidence.
Partisan manipulation, even extreme manipulation, is permissible.
Courts defer heavily to state legislatures unless plaintiffs can clearly show that lawmakers used race as a tool, not merely politics.
In LULAC, challengers succeeded only where the evidence of racial vote dilution was unmistakable. In Abbott v. Perez, they failed everywhere except one district because intent was not proven with the level of granularity the Court demanded.
The result is that Texas has repeatedly prevailed in redistricting litigation—not necessarily because its maps are racially neutral, but because the Court has set an unusually high bar for proving racial motive and has washed its hands of partisan claims altogether.