Public tragedies have a way of collapsing time. Old debates are reopened as if they were never had. Long-standing policies are treated as provisional. And political reflexes reassert themselves with a familiar urgency: something must be done, and whatever is done must be fast, visible, and legislative.
A recent Reuters report describing a mass shooting at a beachside gathering in Australia illustrates this pattern with uncomfortable clarity. The event itself was horrifying. The response was predictable. Within hours, political leaders were discussing emergency parliamentary sessions, tightening gun licensing laws, and revisiting a firearm regime that has been in place for nearly three decades.
What makes this episode especially instructive is not that it occurred in Australia, but that it occurred despite Australia’s reputation for having among the strictest gun control laws in the world. The country’s post-1996 framework—created in the wake of the Port Arthur massacre—has long been cited internationally as a model of decisive legislative action. Yet here, after decades of regulation, registration, licensing, and oversight, the instinctive answer remains the same: more law.
This essay treats the Australian response not as an anomaly, but as a continuation—and confirmation—of two arguments I have made previously: one concerning mass shootings as a systems failure rather than a purely legal failure, and another concerning what I have called “one-page laws”—the belief that complex social problems can be solved by concise statutes and urgent press conferences.
The Reuters Story, Paraphrased
According to Reuters, a deadly shooting at a public gathering in Bondi shocked Australians and immediately raised questions about whether the country’s long-standing firearms regime remains adequate. One of the suspects reportedly held a legal gun license and was authorized to own multiple firearms. In response, state and federal officials suggested that parliament might be recalled to consider reforms, including changes to license duration, suitability assessments, and firearm ownership limits.
The article notes that while Australia’s gun laws dramatically reduced firearm deaths after 1996, the number of legally owned guns has since risen to levels exceeding those prior to the reforms. Advocates argue that this growth, combined with modern risks, requires updated legislation. Political leaders signaled openness to acting quickly.
What the article does not do—and what most post-tragedy coverage does not do—is explain precisely how additional laws would have prevented this specific act, or how such laws would be meaningfully enforced without expanding surveillance, discretion, or intrusion into everyday life.
That omission is not accidental. It reflects a deeper habit in public governance.
The First Essay Revisited: Mass Shootings as Systems Failures
In my earlier essay on mass shootings, I argued that these events are rarely the result of a single legal gap. Instead, they emerge from systemic breakdowns: failures of detection, communication, intervention, and follow-through. Warning signs often exist. Signals are missed, dismissed, or siloed. Institutions act sequentially rather than collectively.
The presence or absence of one additional statute does little to alter those dynamics.
The Australian case reinforces this point. The suspect was not operating in a legal vacuum. The system already required licensing, registration, and approval. The breakdown did not occur because the law was silent; it occurred because law is only one input into a much larger human system.
When tragedy strikes, however, it is far easier to amend a statute than to admit that prevention depends on imperfect human judgment, social cohesion, mental health systems, community reporting, and inter-agency coordination. Laws are tangible. Systems are messy.
The Second Essay Revisited: The Illusion of One-Page Laws
My essay on one-page laws addressed a related but broader problem: the temptation to treat legislation as a substitute for governance.
One-page laws share several characteristics:
They are easy to describe.
They signal moral seriousness.
They create the appearance of action.
They externalize complexity.
The harder questions—Who enforces this? How often? With what discretion? At what cost? With what error rate?—are deferred or ignored.
The Australian response fits this pattern precisely. Proposals to shorten license durations or tighten suitability standards sound decisive, but they conceal the real burden: reviewing thousands of existing licenses, detecting future risk in people who have not yet exhibited it, and doing so without violating basic principles of fairness or due process.
The law can authorize action. It cannot supply foresight.
Where the Two Essays Converge
Taken together, these two arguments point to a shared conclusion: legislation is often mistaken for resolution.
Mass violence is not primarily a legislative failure; it is a detection and intervention failure. One-page laws feel comforting because they compress complexity into moral clarity. But compression is not the same as control.
Australia’s experience underscores a difficult truth: once a society has implemented baseline restrictions, further legislative tightening produces diminishing returns. The remaining risk lies not in legal gaps, but in human unpredictability. Eliminating that last fraction of risk would require levels of monitoring and preemption that most free societies rightly reject.
This is the trade-off no emergency session of parliament wants to articulate.
Why the Reflex Persists
The rush to legislate after tragedy is not irrational—it is political. Laws are visible acts of leadership. They reassure the public that order is being restored. Admitting that not every horror can be prevented without dismantling civil society is a harder message to deliver.
But honesty matters.
Governance is not the art of passing laws; it is the discipline of building systems that function under stress. When tragedy is followed immediately by legislative theater, it risks substituting symbolism for substance and urgency for effectiveness.
Conclusion
The Bondi shooting is not evidence that Australia’s gun laws have failed in some absolute sense. Nor is it proof that further legislation will succeed. It is, instead, a case study—one that reinforces two prior conclusions:
First, that mass violence persists even in highly regulated environments because it arises from human systems, not statutory voids.
Second, that one-page laws offer emotional relief but rarely operational solutions.
Serious problems deserve serious thinking. Not every response can be reduced to a bill number and a headline. And not every tragedy has a legislative cure.
The real challenge is resisting the comforting illusion that lawmaking alone is governance—and doing the slower, quieter, less visible work of strengthening the systems that stand between instability and catastrophe.
A collaboration between Lewis McLain, Paul Grimes & AI (Idea prompted by Paul Grimes, City Manager of McKinney)
Urban Theory Meets the New Texas Growth Regime
I. Why cities experience service growth faster than population
Cities rarely experience growth as a smooth, proportional process. Long before population numbers appear alarming, residents begin to sense strain: longer response times, crowded facilities, rising calls for service, and increasing friction in public space. The discrepancy between modest population growth and outsized service demand has been observed across cities and eras, and it has produced a deep body of urban theory seeking to explain why cities behave this way.
Across disciplines, a shared conclusion emerges: density increases interaction, and interaction accelerates outcomes. These outcomes include innovation, productivity, and cultural vitality—but also conflict, disorder, and service demand. What varies among theorists is not the mechanism itself, but how cities can shape, moderate, or absorb its consequences.
II. Geoffrey West and the mathematics of acceleration
Geoffrey West’s contribution is foundational because it removes morality, politics, and culture from the initial explanation. Cities, in his framework, are not collections of individuals; they are networks. As networks grow denser, the number of possible interactions grows faster than the number of nodes. This produces superlinear scaling in many urban outputs. When population doubles, certain outcomes more than double.
Crucially, West shows that the same mathematical logic governs both positive and negative outcomes. Innovation and GDP rise superlinearly; so do some forms of crime, disease transmission, and social friction. The implication is unsettling but clarifying: cities are social accelerators by design. Service demand tied to interaction will often grow faster than population, not because governance has failed, but because the underlying structure makes it inevitable.
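West’s claim can be made concrete with a few lines of arithmetic. The sketch below uses the scaling form Y = Y0 · N^β with β ≈ 1.15, the exponent Bettencourt and colleagues report for many socioeconomic outputs; the population figures are illustrative only.

```python
# Illustrative sketch of superlinear urban scaling: Y = Y0 * N**beta.
# beta ~ 1.15 is the exponent reported for many socioeconomic outputs
# (innovation, wages, some categories of crime); the population numbers
# below are made up for illustration.

def scaled_output(population: float, y0: float = 1.0, beta: float = 1.15) -> float:
    """Expected interaction-driven output for a city of the given population."""
    return y0 * population ** beta

small_city = scaled_output(100_000)
doubled_city = scaled_output(200_000)

ratio = doubled_city / small_city   # 2**1.15, roughly 2.22
per_capita_gain = ratio / 2         # roughly 1.11: about 11% more per resident
```

Doubling population multiplies interaction-driven outputs by roughly 2.2, not 2.0. That extra 11 percent per resident is the precise sense in which service demand can outrun headcount even when nothing has been mismanaged.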
West assumes, however, that cities respond to acceleration by reinventing themselves—upgrading systems, redesigning institutions, and continuously adapting. That assumption becomes important later.
III. Jane Jacobs and the conditions that turn density into order
Jane Jacobs does not dispute that density increases interaction. Her work asks a different question: what kind of interaction?
Jacobs argues that dense places can be remarkably safe and resilient when they are mixed-use, human-scaled, and continuously occupied. Her concept of “eyes on the street” is not sentimental; it is a theory of informal governance. In healthy neighborhoods, constant presence creates passive supervision. People notice deviations. Streets regulate themselves long before police are required.
But Jacobs is equally clear about the failure mode. Density without diversity—large single-use developments, commuter-only corridors, or isolated residential blocks—removes the stabilizing feedback loops. Interaction still increases, but it becomes episodic, anonymous, and harder to regulate informally. In those conditions, service demand rises sharply.
Jacobs therefore reframes West’s mathematics: density raises interaction; urban form determines whether interaction stabilizes or combusts.
IV. Sampson and the social capacity to absorb friction
Robert Sampson’s work further refines the picture by introducing collective efficacy—the capacity of a community to maintain order through shared norms and willingness to intervene. His research demonstrates that dense or disadvantaged neighborhoods do not inevitably experience high crime. Where social cohesion is strong and institutions are trusted, communities suppress disorder even under pressure.
This matters because it shows that service demand is not driven by density alone. Two areas with similar physical form can generate radically different workloads depending on stability, tenure, turnover, and informal social control. For forecasting, Sampson’s insight is critical: interaction becomes costly when social capacity erodes.
V. Glaeser, incentives, and why density keeps happening
Edward Glaeser explains why density persists despite its costs. Proximity is economically powerful. Dense cities match labor and opportunity more efficiently, transmit knowledge faster, and generate wealth. These benefits accrue quickly and privately, while the costs—service strain, infrastructure wear, social friction—arrive later and publicly.
This asymmetry explains why development pressure is relentless and why political systems often favor growth even when local governments struggle to keep up. Density is not an accident; it is the predictable outcome of incentives embedded in land markets and regional economies.
VI. Scott and the danger of simplified governance
James C. Scott provides the warning. Governments, he argues, tend to simplify complex systems into legible categories because they are easier to manage. But cities function through local variation, informal practices, and spatial nuance. When governance relies too heavily on abstract averages—per-capita ratios, citywide forecasts—it often misses where strain actually emerges.
Service demand concentrates in places, not evenly among people. This is why cities often feel stressed long before the spreadsheets confirm it.
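Scott’s point about averages can be shown numerically. The sketch below uses synthetic call-for-service counts; the pattern mirrors the crime-concentration literature (Weisburd), in which a small share of places generates a large share of incidents while the citywide per-capita rate looks unremarkable. All figures are invented for illustration.

```python
# Synthetic illustration: a citywide per-capita average vs. place-level
# concentration. All counts are invented; only the pattern matters.

calls_by_place = [1] * 90 + [5] * 8 + [120, 140]   # 100 micro-places
population = 50_000

# The "legible" citywide metric: calls per 1,000 residents.
citywide_rate = sum(calls_by_place) / population * 1_000

# What the average hides: share of calls from the busiest 5% of places.
busiest = sorted(calls_by_place, reverse=True)[: len(calls_by_place) // 20]
top_5pct_share = sum(busiest) / sum(calls_by_place)
```

In this toy example the citywide rate works out to 7.8 calls per 1,000 residents, which looks unremarkable, while five places out of a hundred generate about 70 percent of the workload. That gap is the difference between a spreadsheet and a street.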
VII. The missing assumption: cities control the form of their own growth
Despite their differences, these thinkers share a quiet assumption: the city experiencing density also has authority over that density. West assumes institutional reinvention is possible. Jacobs assumes local control over land use and street design. Sampson assumes neighborhoods evolve within a municipal framework. Glaeser assumes prosperity helps fund adaptation. Scott assumes the state has power, even if it misuses it.
That assumption no longer reliably holds in Texas.
VIII. The Texas legislative shift: density without authority
Over the past decade, Texas has steadily constrained municipal authority over annexation and extraterritorial jurisdiction while expanding developer freedom. Growth has not slowed; it has been redirected. Increasingly, large, dense developments are built outside city limits, beyond zoning authority, and often beyond meaningful density control.
Yet interaction does not stop at the city line. Residents of these developments commute through cities, use city roads, access city amenities, and generate service demand that cities are often contractually or practically compelled to address. The result is a new condition: density without authority.
This interrupts the thinkers’ chain of logic. Interaction still accelerates. Service demand still rises. But the city’s ability to shape the form, timing, and integration of growth is weakened. Institutional adaptation becomes reactive rather than formative.
IX. Houston and the path North Texas is now taking
This pattern is not new statewide. The Houston region has long operated under fragmented governance: cities, counties, municipal utility districts (MUDs), and special districts collectively producing urban form without a single coordinating authority. Houston’s growth model has always relied on externalized infrastructure finance and delayed incorporation.
North Texas historically followed a different path. Cities like McKinney and Plano grew through annexation, internalized infrastructure, and municipal sequencing. Density, services, and revenue were aligned.
Texas policy has changed that trajectory. North Texas is being pushed toward a Houston-style future—not by local choice, but by legal structure.
X. Aging: the force that converts today’s growth into tomorrow’s strain
Growth does not remain new. Aging is the force that locks in consequences.
A city dominated by 0–5 year-old apartments is operationally different from the same city thirty years later. As housing stock ages, rents soften, tenant turnover increases, maintenance is deferred, and informal adaptations emerge. The same density produces more service demand over time. A neighborhood shifting from homeowners to renters is, in effect, two different populations occupying the same city.
Infrastructure ages alongside housing. Systems built in growth waves fail in cohorts. Maintenance demands converge. Replacement cycles collide with operating budgets. Even if population stabilizes, service pressure intensifies.
Aging transforms density from an abstract risk into a concrete workload.
XI. Schools as the clearest signal of the lifecycle mismatch
School closures—such as those experienced by McKinney ISD and many other Texas districts—are not isolated education issues. They are urban lifecycle signals.
When cities are young:
Family formation is high
Enrollment grows
Schools are built quickly
As housing ages:
Household size shrinks
Families age in place
Single-family homes convert to rentals
Multifamily units turn over rapidly
Student yield per unit declines
At the same time, infrastructure and neighborhoods age, and service demand rises elsewhere. Police calls, code enforcement, and social services grow even as schools empty. This is the paradox many Texas cities now face: closing schools in growing cities.
School closures therefore mark the transition from growth-driven demand to aging-driven demand. They reveal that population alone no longer explains service needs.
XII. The compounding effect of ETJ growth and aging
ETJ-driven development postpones this reckoning but does not prevent it. New developments outside city limits age just as surely as those inside. When they do, cities face a delayed shock: aging neighborhoods and infrastructure they did not shape, often without full fiscal integration. New growth in the ETJ also requires new local schools; the spare capacity of older schools cannot absorb it, because bus routes and travel distances act as constraints.
Houston has lived with this reality for decades. North Texas is entering it now.
XIII. Conclusion: a new urban regime
The urban theorists remain correct about density, interaction, and acceleration. What Texas has altered is the governing environment in which those forces play out. Annexation limits and ETJ erosion do not stop growth. They delay accountability. Aging ensures that delay is temporary.
For cities like McKinney, the future is not simply more growth, nor even more density. It is a shift toward a fragmented, aging, interaction-heavy urban form—one that increasingly resembles Houston’s long-standing condition rather than North Texas’s historical model.
Understanding this arc—density → interaction → aging → service strain, under diminished local control—is essential before any discussion of elasticity, finance, or sustainability can be honest. Great thinkers are rethinking!
Appendix A
Key Thinkers, Publications, and Intellectual Contributions Referenced in This Essay
This appendix summarizes the principal authors and works referenced in the essay Density, Interaction, Aging, and the Fracturing of Local Control. Each has influenced modern thinking about cities, growth, density, governance, and service demand. The summaries below are intentionally descriptive rather than argumentative.
Geoffrey West
Primary Works
Scale: The Universal Laws of Growth, Innovation, Sustainability, and the Pace of Life in Organisms, Cities, Economies, and Companies (2017)
Bettencourt, L. M. A., et al., “Growth, innovation, scaling, and the pace of life in cities,” Proceedings of the National Academy of Sciences (PNAS), 2007
Core Contribution
West applies principles from physics and network theory to biological and social systems. His work demonstrates that many urban outputs—economic production, innovation, and certain social pathologies—scale superlinearly with population because cities function as dense interaction networks. His framework explains why some service demands grow faster than population and why cities must continually adapt to accelerating pressures.
Relevance to the Essay
Provides the mathematical foundation for understanding why interaction-driven services (public safety, emergency response, enforcement) often outpace population growth.
Jane Jacobs
Primary Works
The Death and Life of Great American Cities (1961)
Core Contribution
Jacobs challenges top-down planning and argues that healthy cities depend on mixed uses, short blocks, human-scale design, and continuous street activity. Her concept of “eyes on the street” explains how informal social control stabilizes dense environments.
Relevance to the Essay
Explains why density does not automatically produce disorder and how urban form determines whether interaction becomes self-regulating or service-intensive.
Robert J. Sampson
Primary Works
Great American City: Chicago and the Enduring Neighborhood Effect (2012)
Sampson, Raudenbush, and Earls, “Neighborhoods and Violent Crime: A Multilevel Study of Collective Efficacy,” Science (1997)
Core Contribution
Sampson introduces the concept of collective efficacy—the ability of communities to maintain order through shared norms and informal intervention. His work demonstrates that social cohesion and neighborhood stability can suppress disorder independent of density.
Relevance to the Essay
Provides the social mechanism explaining why similar densities can produce very different service demands over time.
Edward Glaeser
Primary Works
Triumph of the City (2011)
Core Contribution
Glaeser emphasizes the economic benefits of density, arguing that cities exist because proximity increases productivity, innovation, and opportunity. He frames density as an economic choice driven by incentives rather than a planning failure.
Relevance to the Essay
Explains why growth pressure persists despite service strain and why development tends to outpace municipal capacity to respond.
James C. Scott
Primary Works
Seeing Like a State: How Certain Schemes to Improve the Human Condition Have Failed (1998)
Core Contribution
Scott critiques centralized planning and “legibility”—the tendency of governments to simplify complex systems into administratively convenient categories. He shows how ignoring local knowledge and spatial nuance often produces unintended consequences.
Relevance to the Essay
Warns against overreliance on citywide averages and per-capita metrics in forecasting service demand.
Crime Concentration and Place-Based Policing
Key Authors
David Weisburd
Anthony Braga
Representative Works
Weisburd, “The Law of Crime Concentration at Places,” Criminology
Braga et al., studies on hot-spots policing
Core Contribution
Demonstrates that crime and disorder are highly concentrated in small geographic areas rather than evenly distributed across populations.
Relevance to the Essay
Supports the argument that service demand accelerates spatially and perceptually before it appears in aggregate population statistics.
Urban Economics and Land-Use Structure
Additional Influential Works
Alain Bertaud, Order Without Design (2018)
Donald Shoup, The High Cost of Free Parking (2005)
Core Contributions
Bertaud emphasizes cities as labor markets shaped by land constraints rather than plans. Shoup demonstrates how parking policy distorts density, travel behavior, and land use.
Relevance to the Essay
Provide supporting context for how policy choices shape interaction patterns and service demand indirectly.
Houston-Region Governance and Fragmentation
Institutions and Research
Rice University Kinder Institute for Urban Research
Texas A&M Real Estate Research Center
Core Contribution
Document the long-standing use of special districts, MUDs, and fragmented governance structures in the Houston region and their implications for infrastructure, service delivery, and long-term municipal responsibility.
Relevance to the Essay
Establish Houston as a precedent for the fragmented growth model North Texas is increasingly approaching.
Texas Local Government and Annexation Policy
Statutory Context
Texas Local Government Code, Chapter 42 (Extraterritorial Jurisdiction)
Legislative reforms including SB 6 (2017), HB 347 (2019), and SB 2038 (2023)
Core Contribution
These changes constrain municipal annexation and weaken ETJ authority, altering the alignment between growth, governance, and service responsibility.
Relevance to the Essay
Provide the legal backdrop for the “density without authority” condition described.
School District Demographic and Facility Trends
Contextual Sources
Texas Education Agency (TEA) enrollment data
District-level facility planning and consolidation reports (e.g., MISD and peer districts)
Core Contribution
School closures and consolidations reflect long-term demographic shifts, housing lifecycle effects, and declining student yield in aging neighborhoods.
Relevance to the Essay
Serve as a visible indicator of urban aging and lifecycle mismatch in growing cities.
Closing Note on Use
This appendix is intended to clarify intellectual provenance, not to prescribe policy positions. The essay draws from multiple disciplines—physics, sociology, economics, planning, and public administration—to explain why modern cities experience accelerating service demand under changing governance conditions.
LFM Note: My personal circle of great thinkers leaves me always yearning for more time to visit with them. Lunch with Paul Grimes always takes a deeper probe than I am expecting. A visit with David Leininger always expands my knowledge and surprises me with more than just nuances to improve my vocabulary and vision. Dan Johnson considers me one of his mentors, but he thinks so far above and ahead, describing his way of thinking with facts mixed with a tinge of Greek mythology, that even a short visit with him clarifies who the real mentor is. Our conversations start off with energy and end up with us feeding off each other like two little kids making a discovery. Don Paschal has been a friend and colleague for the longest and is full of experience and wisdom, with a refreshing biblical integration. Becky Brooks is one of my closest colleagues, like a sister in sync with common vision and analyses. There are more. But I must stop here. LFM
Few moments in ancient literature capture the moral courage required to speak truth to power as vividly as the encounter between the prophet Nathan and King David. The scene is brief, almost understated, yet it exposes a problem as old as authority itself: what happens when power no longer hears the truth.
David, at this point in the biblical story, is not a fragile leader. He is Israel’s greatest king—military hero, national symbol, and political success. His reign is stable. His enemies are subdued. His legitimacy is unquestioned. That success, however, has begun to insulate him from accountability.¹
The Bible does not soften what happens next, and it is worth telling plainly.
What David Did
One evening, David notices a woman bathing from the roof of his palace. He learns she is married to one of his own soldiers, a man currently fighting on the front lines. David summons her anyway. As king, his request carries force whether spoken gently or not. She becomes pregnant.²
David now faces exposure. Instead of confessing, he attempts to manage the situation. He recalls the husband from battle, hoping circumstances will hide the truth. When that fails, David escalates. He sends the man back to war carrying a sealed message to the commanding general—an order placing him where the fighting is fiercest and support will be withdrawn.³
The man is killed.
The machinery of power functions smoothly. No inquiry follows. David marries the widow. From the outside, the matter disappears. Politically, the problem is solved. Morally, it has only been buried.
This is the danger Scripture names without hesitation: power does not merely enable wrongdoing; it can normalize it.
Why Nathan Matters
Nathan enters the story not as a revolutionary or rival, but as a prophet—someone whose authority comes from obedience to God rather than proximity to the throne. He is not part of David’s chain of command. He does not benefit from David’s favor. That independence is everything.⁴
Nathan does not accuse David directly. Instead, he tells a story.
He describes two men in a town. One is rich, with vast flocks. The other is poor, possessing only a single lamb—so cherished it eats at his table and sleeps in his arms. When a guest arrives, the rich man does not draw from his abundance. He takes the poor man’s lamb instead.⁵
David is outraged. As king, he pronounces judgment swiftly and confidently. The man deserves punishment. Restitution. Consequences.
Then Nathan speaks the words that collapse the distance between story and reality:
**“You are the man.”**⁶
In an instant, David realizes he has judged himself. Nathan names the facts plainly: David used his power to take what was not his, destroyed a loyal man to conceal it, and assumed his position placed him beyond accountability.
This is not a trap meant to humiliate. It is truth delivered with precision. Nathan allows David’s own moral instincts—still intact beneath layers of authority—to render the verdict.
Speaking Truth to Power Is Dangerous
Nathan’s courage should not be underestimated. Kings do not respond kindly to exposure. Many prophets were imprisoned or killed for far less. Nathan risks his position, his safety, and possibly his life. He cannot know how David will react. Faithfulness here is not measured by outcome but by obedience.⁷
Speaking truth to power is rarely loud. It is rarely celebrated. It requires proximity without dependence, clarity without cruelty, and courage without illusion. Nathan does not shout from outside the palace gates. He walks directly into the seat of power and speaks.
David’s response is remarkable precisely because it is not guaranteed:
*“I have sinned against the Lord.”*⁸
Repentance does not erase consequences. Nathan makes that clear. Forgiveness and accountability coexist. The Bible refuses to confuse mercy with immunity.⁹
Why This Story Still Matters
This encounter reveals something essential about power: authority tends to surround itself with affirmation and silence. Over time, wrongdoing becomes justified, then invisible. Institutions close ranks. Loyalty replaces truth. Image replaces integrity.
Nathan represents the indispensable outsider—the one who loves truth more than access and justice more than comfort. He does not seek to destroy David. He seeks to save him from becoming a king who can no longer hear.
Scripture does not present leaders as villains by default. It presents them as dangerous precisely because they are human. Power magnifies both virtue and vice. Without truth, it corrodes.¹⁰
The Broken Hallelujah
This is where Leonard Cohen’s Hallelujah belongs—not as ornament, but as interpretation.
The song opens with David’s musical gift, his calling, his nearness to God:
“Now I’ve heard there was a secret chord That David played, and it pleased the Lord…”
But Cohen does not linger there. He moves quickly to the roof, the bath, the fall:
“You saw her bathing on the roof Her beauty and the moonlight overthrew you.”
Cohen refuses to romanticize David any more than Nathan does. He understands that David’s story is not primarily about victory, but about collapse and confession. And he understands something many listeners miss: praise spoken after exposure cannot sound the same as praise spoken before it.
That is why the refrain matters:
“It’s a broken hallelujah.”
A cheap hallelujah is easy—praise without truth, worship without repentance, confidence without cost. It thrives where power is affirmed but never confronted.¹¹
A broken hallelujah is what remains when illusion is stripped away. It is praise that has passed through judgment. It is faith no longer dependent on image, position, or success. It is what David offers in Psalm 51, after Nathan leaves and the consequences remain.¹²
Nathan does not end David’s worship. He saves it from becoming hollow.
For Our Time
Nathan’s story is not ancient trivia. It is a permanent challenge.
Every generation builds systems that reward silence and discourage dissent—governments, corporations, churches, universities, families. Power still resists accountability. Truth still carries a cost. And praise without honesty still rings empty.
Speaking truth to power does not guarantee reform. It guarantees integrity.
Nathan spoke. David listened. And centuries later, a songwriter captured what that moment sounds like from the inside—not triumphant, not resolved, but honest.
Not every hallelujah is joyful. Some are whispered. Some are broken. And those may be the ones worth hearing most.
Scripture References & Notes
David’s power and success: 2 Samuel 5–10
Bathsheba episode begins: 2 Samuel 11:1–5
Uriah’s death order: 2 Samuel 11:14–17
Nathan as prophet to David: 2 Samuel 7; 2 Samuel 12
Nathan’s parable: 2 Samuel 12:1–4
“You are the man”: 2 Samuel 12:7
Prophetic risk: cf. 1 Kings 18; Jeremiah 20:1–2
David’s confession: 2 Samuel 12:13
Consequences despite forgiveness: 2 Samuel 12:10–14
Power and accountability theme: Proverbs 29:2; Psalm 82
Now I’ve heard there was a secret chord
That David played, and it pleased the Lord
But you don’t really care for music, do you?
It goes like this, the fourth, the fifth
The minor falls, the major lifts
The baffled king composing Hallelujah

Hallelujah, Hallelujah
Hallelujah, Hallelujah

Your faith was strong but you needed proof
You saw her bathing on the roof
Her beauty and the moonlight overthrew you
She tied you to a kitchen chair
She broke your throne, and she cut your hair
And from your lips she drew the Hallelujah

Hallelujah, Hallelujah
Hallelujah, Hallelujah

You say I took the name in vain
I don’t even know the name
But if I did, well, really, what’s it to you?
There’s a blaze of light in every word
It doesn’t matter which you heard
The holy or the broken Hallelujah

Hallelujah, Hallelujah
Hallelujah, Hallelujah

I did my best, it wasn’t much
I couldn’t feel, so I tried to touch
I’ve told the truth, I didn’t come to fool you
And even though it all went wrong
I’ll stand before the Lord of Song
With nothing on my tongue but Hallelujah
Excel, SQL Server, Power BI — With AI Doing the Heavy Lifting
A collaboration between Lewis McLain & AI
Introduction: The Skill That Now Matters Most
The most important analytical skill today is no longer memorizing syntax, mastering a single tool, or becoming a narrow specialist.
The must-have skill is knowing how to direct intelligence.
In practice, that means combining:
Excel for thinking, modeling, and scenarios
SQL Server for structure, scale, and truth
Power BI for communication and decision-making
AI as the teacher, coder, documenter, and debugger
This is not about replacing people with AI. It is about finally separating what humans are best at from what machines are best at—and letting each do their job.
1. Stop Explaining. Start Supplying.
One of the biggest mistakes people make with AI is trying to explain complex systems to it in conversation.
That is backward.
The Better Approach
If your organization has:
an 80-page budget manual
a cost allocation policy
a grant compliance guide
a financial procedures handbook
even the City Charter
Do not summarize it for AI. Give AI the document.
Then say:
“Read this entire manual. Summarize it back to me in 3–5 pages so I can confirm your understanding.”
This is where AI excels.
AI is extraordinarily good at:
absorbing long, dense documents
identifying structure and hierarchy
extracting rules, exceptions, and dependencies
restating complex material in plain language
Once AI demonstrates understanding, you can say:
“Assume this manual governs how we budget. Based on that understanding, design a new feature that…”
From that point on, AI is no longer guessing. It is operating within your rules.
This is the fundamental shift:
Humans provide authoritative context
AI provides execution, extension, and suggested next steps
You will see this principle repeated throughout this post and the appendices—because everything else builds on it.
2. The Stack Still Matters (But for Different Reasons Now)
AI does not eliminate the need for Excel, SQL Server, or Power BI. It makes them far more powerful—and far more accessible.
Excel — The Thinking and Scenario Environment
Excel remains the fastest way to:
test ideas
explore “what if” questions
model scenarios
communicate assumptions clearly
What has changed is not Excel—it is the burden placed on the human.
You no longer need to:
remember every formula
write VBA macros from scratch
search forums for error messages
AI already understands:
Excel formulas
Power Query
VBA (Visual Basic for Applications, Excel’s automation language)
You can say:
“Write an Excel model with inputs, calculations, and outputs for this scenario.”
AI will:
generate the formulas
structure the workbook cleanly
comment the logic
explain how it works
If something breaks:
AI reads the error message
explains why it occurred
fixes the formula or macro
Excel becomes what it was always meant to be: a thinking space, not a memory test.
SQL Server — The System of Record and Truth
SQL Server is where analysis becomes reliable, repeatable, and scalable.
It holds:
historical data (millions of records are routine)
structured dimensions
consistent definitions
auditable transformations
Here is the shift AI enables:
You do not need to be a syntax expert.
SQL (Structured Query Language) is something AI already understands deeply.
You can say:
“Create a SQL view that allocates indirect costs by service hours. Include validation queries.”
AI will:
write the SQL
optimize joins
add comments
generate test queries
flag edge cases
produce clear documentation
AI can also interpret SQL Server error messages, explain them in plain English, and rewrite the code correctly.
This removes one of the biggest barriers between finance and data systems.
SQL stops being “IT-only” and becomes a shared analytical language, with AI translating analytical intent into executable code.
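As an illustration of what such an allocation view and its validation query look like, here is a minimal sketch using Python's built-in sqlite3 module. Every table name, column name, and figure is invented for the example; a real allocation would run against your own schema and cost pools:

```python
import sqlite3

# Hypothetical tables for illustration only.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE service_hours (dept TEXT, hours REAL);
CREATE TABLE indirect_costs (total REAL);
INSERT INTO service_hours VALUES ('Police', 600), ('Fire', 300), ('Parks', 100);
INSERT INTO indirect_costs VALUES (50000.0);

-- Allocate the indirect cost pool to departments by share of service hours.
CREATE VIEW indirect_allocation AS
SELECT s.dept,
       s.hours / (SELECT SUM(hours) FROM service_hours) AS share,
       s.hours / (SELECT SUM(hours) FROM service_hours)
           * (SELECT total FROM indirect_costs) AS allocated
FROM service_hours s;
""")

# Validation query: the allocations must tie back to the cost pool.
allocated = con.execute("SELECT SUM(allocated) FROM indirect_allocation").fetchone()[0]
pool = con.execute("SELECT total FROM indirect_costs").fetchone()[0]
assert abs(allocated - pool) < 0.01, "Allocation does not tie to the pool"
```

The point of the paired validation query is exactly what the prompt above asks for: the view is not trusted until its total independently reconciles to the source.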
Power BI — Where Decisions Happen
Power BI is the communication layer: dashboards, trends, drilldowns, and monitoring.
It relies on DAX (Data Analysis Expressions), the calculation language used by Power BI.
Here is the key reassurance:
AI already understands DAX extremely well.
DAX is:
rule-based
pattern-driven
language-like
This makes it ideal for AI assistance.
You do not need to memorize DAX syntax. You need to describe what you want.
For example:
“I want year-over-year change, rolling 12-month averages, and per-capita measures that respect slicers.”
AI can:
write the measures
explain filter context
fix common mistakes
refactor slow logic
document what each measure does
Power BI becomes less about struggling with formulas and more about designing the right questions.
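For readers who want to see the logic behind those measures, here is a plain-Python sketch of year-over-year change and a rolling 12-month average. The monthly figures are made up, and in Power BI the same logic would be written in DAX with filter context handled by the model; this only shows the arithmetic being asked for:

```python
# Monthly totals (hypothetical figures) keyed by (year, month).
monthly = {(2023, m): 100.0 + m for m in range(1, 13)}
monthly.update({(2024, m): 110.0 + m for m in range(1, 13)})

def yoy_change(year, month):
    """Year-over-year change: this month minus the same month last year."""
    return monthly[(year, month)] - monthly[(year - 1, month)]

def rolling_12_avg(year, month):
    """Average of the 12 months ending at (year, month)."""
    values = []
    y, m = year, month
    for _ in range(12):
        values.append(monthly[(y, m)])
        m -= 1
        if m == 0:
            y, m = y - 1, 12
    return sum(values) / 12
```

Describing this intent in plain language, as in the prompt above, is usually enough for AI to produce the equivalent DAX measures and explain their filter context.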
3. AI as the Documentation Engine (Quietly Transformational)
Documentation is where most analytical systems decay.
Excel models with no explanation
SQL views nobody understands
Macros written years ago by someone who left
Reports that “work” but cannot be trusted
AI changes this completely.
SQL Documentation
AI can:
add inline comments to SQL queries
write plain-English descriptions of each view
explain table relationships
generate data dictionaries automatically
You can say:
“Document this SQL view so a new analyst understands it.”
And receive:
a clear narrative
assumptions spelled out
warnings about common mistakes
Excel & Macro Documentation
AI can:
explain what each worksheet does
document VBA macros line-by-line
generate user instructions
rewrite messy macros into cleaner, documented code
Recently, I had a powerful but stodgy Excel workbook with over 1.4 million formulas. AI read the entire file, explained the internal logic accurately, and rewrote the system in SQL with a few hundred well-documented lines—producing identical results.
Documentation stops being an afterthought. It becomes cheap, fast, and automatic.
4. AI as Debugger and Interpreter
One of AI’s most underrated strengths is error interpretation.
AI excels at:
reading cryptic error messages
identifying likely causes
suggesting fixes
explaining failures in plain language
You can copy-paste an error message without comment and say:
“Explain this error and fix the code.”
This applies to:
Excel formulas
VBA macros
SQL queries
Power BI refresh errors
DAX logic problems
Hours of frustration collapse into minutes.
5. What Humans Still Must Do (And Always Will)
AI is powerful—but it is not responsible for outcomes.
Humans must still:
define what words mean (“cost,” “revenue,” “allocation”)
set policy boundaries
decide what is reasonable
validate results
interpret implications
make decisions
The human role becomes:
director
creator
editor
judge
translator
AI does not replace judgment. It amplifies disciplined judgment.
6. Why This Matters Across the Organization
For Managers
Faster insight
Clearer explanations
Fewer “mystery numbers”
Greater confidence in decisions
For Finance Professionals
Less time fighting tools
More time on policy, tradeoffs, and risk
Stronger documentation and audit readiness
For IT Professionals
Cleaner specifications
Fewer misunderstandings
Better separation of logic and presentation
More maintainable systems
This is not a turf shift. It is a clarity shift.
7. The Real Skill Shift
The modern analyst does not need to:
memorize every function
master every syntax rule
become a full-time programmer
The modern analyst must:
ask clear questions
supply authoritative context
define constraints
validate outputs
communicate meaning
AI handles the rest.
Conclusion: Intelligence, Directed
Excel, SQL Server, and Power BI remain the backbone of serious analysis—not because they are trendy, but because they mirror how thinking, systems, and decisions actually work.
AI changes how we use them:
it reads the manuals
writes the code
documents the logic
fixes the errors
explains the results
Humans provide direction. AI provides execution.
Those who learn to work this way will not just be more efficient—they will be more credible, more influential, and more future-proof.
Appendix A
A Practical AI Prompt Library for Finance, Government, and Analytical Professionals
This appendix is meant to be used, not admired.
These prompts reflect how professionals actually work: with rules, constraints, audits, deadlines, and political consequences.
You are not asking AI to “be smart.” You are directing intelligence.
A.1 The Context Prompt
“Read the attached document in full. Treat it as authoritative. Summarize the structure, rules, definitions, exceptions, and dependencies. Do not add assumptions. I will confirm your understanding.”
Why this matters
Eliminates guessing
Aligns AI with your institutional reality
Prevents hallucinated rules
A.2 Excel Modeling Prompts
Scenario Model
“Design an Excel workbook with Inputs, Calculations, and Outputs tabs. Use named ranges. Include scenario toggles and validation checks that confirm totals tie out.”
Formula Debugging
“This Excel formula returns an error. Explain why, fix it, and rewrite it in a clearer form.”
Macro Creation
“Write a VBA macro that refreshes all data connections, recalculates, logs a timestamp, and alerts the user if validation checks fail. Comment every section.”
Documentation
“Explain this Excel workbook as if onboarding a new analyst. Describe what each worksheet does and how inputs flow to outputs.”
A.3 SQL Server Prompts
View Creation
“Create a SQL view that produces monthly totals by City and Department. Grain must be City-Month-Department. Exclude void transactions. Add comments and validation queries.”
Performance Refactor
“Refactor this SQL query for performance without changing results. Explain what you changed and why.”
Error Interpretation
“Here is a SQL Server error message. Explain it in plain English and fix the query.”
Documentation
“Document this SQL schema so a new analyst understands table purpose, keys, and relationships.”
A.4 Power BI / DAX Prompts
(DAX = Data Analysis Expressions, the calculation language used by Power BI — a language AI already understands deeply.)
Measure Creation
“Create DAX measures for Total Cost, Cost per Capita, Year-over-Year Change, and Rolling 12-Month Average. Explain filter context for each.”
Debugging
“This DAX measure returns incorrect results when filtered. Explain why and correct it.”
Model Review
“Review this Power BI data model and identify risks: ambiguous relationships, missing dimensions, or inconsistent grain.”
A.5 Validation & Audit Prompts
Validation Suite
“Create validation queries that confirm totals tie to source systems and flag variances greater than 0.1%.”
Audit Explanation
“Explain how this model produces its final numbers in language suitable for auditors.”
A.6 Training & Handoff Prompts
Training Guide
“Create a training guide for an internal analyst explaining how to refresh, validate, and extend this model safely.”
Institutional Memory
“Write a ‘how this system thinks’ document explaining design philosophy, assumptions, and known limitations.”
Key Principle
Good prompts don’t ask for brilliance. They provide clarity.
Appendix B
How to Validate AI-Generated Analysis Without Becoming Paranoid
AI does not eliminate validation. It raises the bar for it.
The danger is not trusting AI too much. The danger is trusting anything without discipline.
B.1 The Rule of Independent Confirmation
Every important number must:
tie to a known source, or
be independently recomputable
If it cannot be independently confirmed, it is not final.
B.2 Validation Layers (Use All of Them)
Layer 1 — Structural Validation
Correct grain (monthly vs annual)
No duplicate keys
Expected row counts
Layer 2 — Arithmetic Validation
Subtotals equal totals
Allocations sum to 100%
No unexplained residuals
Layer 3 — Reconciliation
Ties to GL, ACFR, payroll, ridership, etc.
Same totals across tools (Excel, SQL, Power BI)
Layer 4 — Reasonableness Tests
Per-capita values plausible?
Sudden jumps explainable?
Trends consistent with known events?
AI can help generate all four layers, but humans must decide what “reasonable” means.
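A minimal sketch of what the first three layers look like as executable checks, with layer 4 reduced to a simple flag for human review. All figures, field names, and the tolerance are illustrative:

```python
# rows: hypothetical monthly detail records; gl_total: the authoritative figure.
rows = [
    {"dept": "Police", "month": "2024-01", "amount": 400.0},
    {"dept": "Fire",   "month": "2024-01", "amount": 250.0},
    {"dept": "Parks",  "month": "2024-01", "amount": 100.0},
]
gl_total = 750.0

# Layer 1 - structural: no duplicate (dept, month) keys at the stated grain.
keys = [(r["dept"], r["month"]) for r in rows]
assert len(keys) == len(set(keys)), "duplicate grain keys"

# Layer 2 - arithmetic: subtotals roll up to a grand total.
total = sum(r["amount"] for r in rows)

# Layer 3 - reconciliation: tie to the general ledger within 0.1%.
assert abs(total - gl_total) / gl_total <= 0.001, "fails GL reconciliation"

# Layer 4 - reasonableness is a human judgment; just surface candidates.
flagged = [r for r in rows if r["amount"] > 2 * total / len(rows)]
```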
B.3 The “Explain It Back” Test
One of the strongest validation techniques:
“Explain how this number was produced step by step.”
If the explanation:
is coherent
references known rules
matches expectations
You’re on solid ground.
If not, stop.
B.4 Change Detection
Always compare:
this month vs last month
current version vs prior version
Ask AI:
“Identify and explain every material change between these two outputs.”
This catches silent errors early.
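A change-detection helper along these lines is easy to sketch. The 5% materiality threshold and the sample figures are assumptions; each organization sets its own:

```python
def material_changes(prior, current, threshold=0.05):
    """Flag keys whose value moved by more than `threshold` (relative),
    plus keys that appeared or disappeared between versions."""
    changes = []
    for key in sorted(set(prior) | set(current)):
        if key not in prior:
            changes.append((key, "new"))
        elif key not in current:
            changes.append((key, "removed"))
        elif prior[key] and abs(current[key] - prior[key]) / abs(prior[key]) > threshold:
            changes.append((key, f"{prior[key]} -> {current[key]}"))
    return changes

last_month = {"Police": 400.0, "Fire": 250.0}
this_month = {"Police": 480.0, "Fire": 251.0, "Parks": 90.0}
```

Running `material_changes(last_month, this_month)` would surface the new Parks line and the 20% jump in Police, while ignoring the immaterial change in Fire.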
B.5 What Validation Is Not
Validation is not:
blind trust
endless skepticism
redoing everything manually
Validation is structured confidence-building.
B.6 Why AI Helps Validation (Instead of Weakening It)
AI:
generates test queries quickly
explains failures clearly
documents expected behavior
flags anomalies humans may miss
AI doesn’t reduce rigor. It makes rigor affordable.
Appendix C
What Managers Should Ask For — and What They Should Stop Asking For
This appendix is for leaders.
Good management questions produce good systems. Bad questions produce busywork.
C.1 What Managers Should Ask For
“Show me the assumptions.”
If assumptions aren’t visible, the output isn’t trustworthy.
“How does this tie to official numbers?”
Every serious analysis must reconcile to something authoritative.
“What would change this conclusion?”
Good models reveal sensitivities, not just answers.
“How will this update next month?”
If refresh is manual or unclear, the model is fragile.
“Who can maintain this if you’re gone?”
This forces documentation and institutional ownership.
C.2 What Managers Should Stop Asking For
❌ “Just give me the number.”
Numbers without context are liabilities.
❌ “Can you do this quickly?”
Speed without clarity creates rework and mistrust.
❌ “Why can’t this be done in Excel?”
Excel is powerful—but it is not a system of record.
❌ “Can’t AI just do this automatically?”
AI accelerates work within rules. It does not invent governance.
C.3 The Best Managerial Question of All
“How confident should I be in this, and why?”
That question invites:
validation
explanation
humility
trust
It turns analysis into leadership support instead of technical theater.
Appendix D
Job Description: The Modern Analyst (0–3 Years Experience)
This job description reflects what an effective, durable analyst looks like today — not a unicorn, not a senior architect, and not a narrow technician.
This role assumes the analyst will work in an environment that uses Excel, SQL Server, Power BI, and AI tools as part of normal operations.
Position Title
Data / Financial / Business Analyst (Title may vary by organization)
Experience Level
Entry-level to 3 years of professional experience
Recent graduates encouraged to apply
Role Purpose
The Modern Analyst supports decision-making by:
transforming raw data into reliable information,
building repeatable analytical workflows,
documenting logic clearly,
and communicating results in ways leaders can trust.
This role is not about memorizing syntax or becoming a single-tool expert. It is about directing analytical tools — including AI — with clarity, discipline, and judgment.
Core Responsibilities
1. Analytical Thinking & Problem Framing
Translate business questions into analytical tasks
Clarify assumptions, definitions, and scope before analysis begins
Identify what data is needed and where it comes from
Ask follow-up questions when requirements are ambiguous
4. Power BI Reporting
Build and maintain Power BI reports and dashboards
Use existing semantic models and measures
Create new measures using DAX (Data Analysis Expressions) with AI guidance
Ensure reports:
align with defined metrics
update reliably
are understandable to non-technical users
5. Documentation & Knowledge Transfer
Document:
Excel models
SQL queries
Power BI reports
Write explanations that allow another analyst to:
understand the logic
reproduce results
maintain the system
Use AI to accelerate documentation while ensuring accuracy
6. Validation & Quality Control
Reconcile outputs to authoritative sources
Identify anomalies and unexplained changes
Use validation checks rather than assumptions
Explain confidence levels and limitations clearly
7. Collaboration & Communication
Work with:
finance
operations
IT
management
Present findings clearly in plain language
Respond constructively to questions and challenges
Accept feedback and revise analysis as needed
Required Skills & Competencies
Analytical & Professional Skills
Curiosity and skepticism
Attention to detail
Comfort asking clarifying questions
Willingness to document work
Ability to explain complex ideas simply
Technical Skills (Baseline)
Excel (intermediate level or higher)
Basic SQL (SELECT, JOIN, GROUP BY)
Familiarity with Power BI or similar BI tools
Comfort using AI tools for coding, explanation, and documentation
Candidates are not expected to know everything on day one.
Preferred Qualifications
Degree in:
Finance
Accounting
Economics
Data Analytics
Information Systems
Engineering
Public Administration
Internship or project experience involving data analysis
Exposure to:
budgeting
forecasting
cost allocation
operational metrics
What Success Looks Like (First 12–18 Months)
A successful analyst in this role will be able to:
independently build and explain Excel models
write and validate SQL queries with AI assistance
maintain Power BI reports without breaking definitions
document their work clearly
flag issues early rather than hiding uncertainty
earn trust by being transparent and disciplined
What This Role Is Not
This role is not:
a pure programmer role
a dashboard-only role
a “press the button” reporting job
a role that values speed over accuracy
Why This Role Matters
Organizations increasingly fail not because they lack data, but because:
logic is undocumented
assumptions are hidden
systems are fragile
knowledge walks out the door
This role exists to prevent that.
Closing Note to Candidates
You do not need to be an expert in every tool.
You do need to:
think clearly,
communicate honestly,
learn continuously,
and use AI responsibly.
If you can do that, the tools will follow.
Appendix E
Interview Questions a Strong Analyst Should Ask
(And Why the Answers Matter)
This appendix is written for candidates — especially early-career analysts — who want to succeed, grow, and contribute meaningfully.
These are not technical questions. They are questions about whether the environment supports good analytical work.
A thoughtful organization will welcome these questions. An uncomfortable response is itself an answer.
1. Will I Have Timely Access to the Data I’m Expected to Analyze?
Why this matters
Analysts fail more often from lack of access than lack of ability.
If key datasets (such as utility billing, payroll, permitting, or ridership data) require long approval chains, partial access, or repeated manual requests, analysis stalls. Long delays force analysts to restart work cold, which is inefficient and demoralizing.
A healthy environment has:
clear data access rules,
predictable turnaround times,
and documented data sources.
2. Will I Be Able to Work in Focused Blocks of Time?
Why this matters
Analytical work requires concentration and continuity.
If an analyst’s day is fragmented by:
constant meetings,
urgent ad-hoc requests,
unrelated administrative tasks,
then even talented analysts struggle to make progress. Repeated interruptions over days or weeks force constant re-learning and increase error risk.
Strong teams protect at least some uninterrupted time for deep work.
3. How Often Are Priorities Changed Once Work Has Started?
Why this matters
Changing priorities is normal. Constant resets are not.
Frequent shifts without closure:
waste effort,
erode confidence,
and prevent analysts from seeing work through to completion.
A good environment allows:
exploratory work,
followed by stabilization,
followed by delivery.
Analysts grow fastest when they can complete full analytical cycles.
4. Will I Be Asked to Do Significant Work Outside the Role You’re Hiring Me For?
Why this matters
Early-career analysts often fail because they are overloaded with tasks unrelated to analysis:
ad-hoc administrative work,
manual data entry,
report formatting unrelated to insights,
acting as an informal IT support desk.
This dilutes skill development and leads to frustration.
A strong role respects analytical focus while allowing reasonable cross-functional exposure.
5. Where Will This Role Sit Organizationally?
Why this matters
Analysts thrive when they are close to:
decision-makers,
subject-matter experts,
and the business context.
Being housed in IT can be appropriate in some organizations, but analysts often succeed best when:
they are embedded in finance, operations, or planning,
with strong, cooperative support from IT, not ownership by IT.
Clear role placement reduces confusion about expectations and priorities.
6. What Kind of Support Will I Have from IT?
Why this matters
Analysts do not need IT to do their work for them — but they do need:
help with access,
guidance on standards,
and assistance when systems issues arise.
A healthy environment has:
defined IT support pathways,
mutual respect between analysts and IT,
and shared goals around data quality and security.
Adversarial or unclear relationships slow everyone down.
7. Will I Be Encouraged to Document My Work — and Given Time to Do So?
Why this matters
Documentation is often praised but rarely protected.
If analysts are rewarded only for speed and output, documentation becomes the first casualty. This creates fragile systems and makes handoffs painful.
Strong organizations:
value documentation,
allow time for it,
and recognize it as part of the job, not overhead.
8. How Will Success Be Measured in the First Year?
Why this matters
Vague success criteria create anxiety and misalignment.
A healthy answer includes:
skill development,
reliability,
learning the organization’s data,
and increasing independence over time.
Early-career analysts need space to learn without fear of being labeled “slow.”
9. What Happens When Data or Assumptions Are Unclear?
Why this matters
No dataset is perfect.
Analysts succeed when:
questions are welcomed,
assumptions are discussed openly,
and uncertainty is handled professionally.
An environment that discourages questions or punishes transparency leads to quiet errors and loss of trust.
10. Will I Be Allowed — and Encouraged — to Use Modern Tools Responsibly?
Why this matters
Analysts today learn and work using tools like:
Excel,
SQL,
Power BI,
and AI-assisted analysis.
If these tools are discouraged, restricted without explanation, or treated with suspicion, analysts are forced into inefficient workflows. In many cases, the latest versions, with their added features, deliver real productivity gains. Is the organization more than a year or two behind on updates? What are the views of key players about AI?
Strong organizations focus on:
governance,
validation,
and responsible use — not blanket prohibition.
11. How Are Analytical Mistakes Handled?
Why this matters
Mistakes happen — especially while learning.
The question is whether the culture responds with:
learning and correction, or
blame and fear.
Analysts grow fastest in environments where:
mistakes are surfaced early,
corrected openly,
and used to improve systems.
12. Who Will I Learn From?
Why this matters
Early-career analysts need:
examples,
feedback,
and mentorship.
Even informal guidance matters.
A thoughtful answer shows the organization understands that analysts are developed, not simply hired.
Closing Note to Candidates
These questions are not confrontational. They are professional.
Organizations that welcome them are more likely to:
retain talent,
produce reliable analysis,
and build durable systems.
If an organization cannot answer these questions clearly, it does not mean it is a bad place — but it may not yet be a good place for an analyst to thrive.
Appendix F
A Necessary Truce: IT Control, Analyst Access, and the Role of Sandboxes
One of the most common — and understandable — tensions in modern organizations sits at the boundary between IT and analytical staff.
It usually sounds like this:
“We can’t let anyone outside IT touch live databases.”
On this point, IT is absolutely right.
Production systems exist to:
run payroll,
bill customers,
issue checks,
post transactions,
and protect sensitive information.
They must be:
stable,
secure,
auditable,
and minimally disturbed.
No serious analyst disputes this.
But here is the equally important follow-up question — one that often goes unspoken:
If analysts cannot access live systems, do they have access to a safe, current analytical environment instead?
Production Is Not the Same Thing as Analysis
The core misunderstanding is not about permission. It is about purpose.
Production systems are built to execute transactions correctly.
Analytical systems are built to understand what happened.
These are different jobs, and they should live in different places.
IT departments already understand this distinction in principle. The question is whether it has been implemented in practice.
The Case for Sandboxes and Analytical Mirrors
A well-run organization does not give analysts access to live transactional tables.
Instead, it provides:
read-only mirrors
overnight refreshes at a minimum
restricted, de-identified datasets
clearly defined analytical schemas
This is not radical. It is standard practice in mature organizations.
What a Sandbox Actually Is
A sandbox is:
a copy of production data,
refreshed on a schedule (often nightly),
isolated from operational systems,
and safe to explore without risk.
Analysts can:
query freely,
build models,
validate logic,
and document findings
…without the possibility of disrupting operations.
A Practical Example: Payroll and Personnel Data
Payroll is often cited as the most sensitive system — and rightly so.
But here is the practical reality:
Most analytical work does not require:
Social Security numbers
bank account details
wage garnishments
benefit elections
direct deposit instructions
What analysts do need are things like:
position counts
departments
job classifications
pay grades
hours worked
overtime
trends over time
A Payroll / Personnel sandbox can be created that:
mirrors the real payroll tables,
strips or masks protected fields,
replaces SSNs with surrogate keys,
removes fields irrelevant to analysis,
refreshes nightly from production
This allows analysts to answer questions such as:
How is staffing changing?
Where is overtime increasing?
What are vacancy trends?
How do personnel costs vary by department or function?
All without exposing sensitive personal data.
This is not a compromise of security. It is an application of data minimization, a core security principle.
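As a sketch of the masking idea, here is a minimal example using Python's built-in sqlite3: the sandbox view exposes a hashed surrogate key and analysis-relevant fields only. Table names, column names, and data are all hypothetical:

```python
import sqlite3
import hashlib

con = sqlite3.connect(":memory:")
# Hypothetical production payroll table (columns are illustrative).
con.execute("CREATE TABLE payroll (ssn TEXT, name TEXT, dept TEXT, grade TEXT, hours REAL, ot_hours REAL)")
con.executemany("INSERT INTO payroll VALUES (?,?,?,?,?,?)", [
    ("111-22-3333", "A. Smith", "Police", "P2", 160, 12),
    ("444-55-6666", "B. Jones", "Fire",   "F1", 160, 20),
])

def surrogate(ssn):
    """Deterministic surrogate key derived by hashing; the SSN never leaves production."""
    return hashlib.sha256(ssn.encode()).hexdigest()[:12]

con.create_function("surrogate", 1, surrogate)

# Sandbox view: masked identity, analysis-relevant fields only.
con.execute("""
CREATE VIEW payroll_sandbox AS
SELECT surrogate(ssn) AS emp_key, dept, grade, hours, ot_hours
FROM payroll
""")

# Analysts query the view, never the base table.
rows = con.execute("SELECT dept, SUM(ot_hours) FROM payroll_sandbox GROUP BY dept").fetchall()
```

In practice the masking would run inside the nightly refresh under IT governance; the point here is only that overtime trends by department are fully answerable without a single protected field.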
Why This Matters More Than IT Realizes
When analysts lack access to safe, current analytical data, several predictable failures occur:
Analysts rely on stale exports
Logic is rebuilt repeatedly from scratch
Results drift from official numbers
Trust erodes between departments
Decision-makers get inconsistent answers
Ironically, over-restriction often increases risk, because:
people copy data locally,
spreadsheets proliferate,
and controls disappear entirely.
A well-designed sandbox reduces risk by centralizing access under governance.
What IT Is Right to Insist On
IT is correct to insist on:
no write access
no direct production access
strong role-based security
auditing and logging
clear ownership of schemas
documented refresh processes
None of that is negotiable.
But those safeguards are fully compatible with analyst access — if access is provided in the right environment.
What Analysts Are Reasonably Asking For
Analysts are not asking to:
run UPDATE statements on live tables
bypass security controls
access protected personal data
manage infrastructure
They are asking for:
timely access to analytical copies of data
predictable refresh schedules
stable schemas
and the ability to do their job without constant resets
That is a governance problem, not a personnel problem.
The Ideal Operating Model
In a healthy organization:
IT owns production systems
IT builds and governs analytical mirrors
Analysts work in sandboxes
Finance and operations define meaning
Validation ties analysis back to production totals
Everyone wins
This model:
protects systems,
protects data,
supports analysis,
and builds trust.
Why This Belongs in This Series
Earlier appendices described:
the skills of the modern analyst,
the questions analysts should ask,
and the environments that cause analysts to fail or succeed.
This appendix addresses a core environmental reality:
Analysts cannot succeed without access — and access does not require risk.
The solution is not fewer analysts or tighter gates. The solution is better separation between production and analysis.
A Final Word to IT, Finance, and Leadership
This is not an argument against IT control.
It is an argument for IT leadership.
The most effective IT departments are not those that say “no” most often — they are the ones that say:
“Here is the safe way to do this.”
Sandboxes, data warehouses, and analytical mirrors are not luxuries. They are the infrastructure that allows modern organizations to think clearly without breaking what already works.
Closing Note on the Appendices
These appendices complete the framework:
The main essay explains the stack
The follow-up explains how to direct AI
These appendices make it operational
Together, they describe not just how to use AI—but how to use it responsibly, professionally, and durably.
A technical framework for staffing, facilities, and cost projection
Abstract
In local government forecasting, population is the dominant driver of service demand, staffing requirements, facility needs, and operating costs. While no municipal system can be forecast with perfect precision, population-based models—when properly structured—produce estimates that are sufficiently accurate for planning, budgeting, and capital decision-making. Crucially, population growth in cities is not a sudden or unknowable event.
Through annexation, zoning, platting, infrastructure construction, utility connections, and certificates of occupancy, population arrival is observable months or years in advance. This paper presents population not merely as a driver, but as a leading indicator, and demonstrates how cities can convert development approvals into staged population forecasts that support rational staffing, facility sizing, capital investment, and operating cost projections.
1. Introduction: Why population sits at the center
Local governments exist to provide services to people. Police protection, fire response, streets, parks, water, sanitation, administration, and regulatory oversight are all mechanisms for supporting a resident population and the activity it generates. While policy choices and service standards influence how services are delivered, the volume of demand originates with population.
Practitioners often summarize this reality informally:
“Tell me the population, and I can tell you roughly how many police officers you need. If I know the staff, I can estimate the size of the building. If I know the size, I can estimate the construction cost. If I know the size, I can estimate the electricity bill.”
This paper formalizes that intuition into a defensible forecasting framework and addresses a critical objection: population is often treated as uncertain or unknowable. In practice, population growth in cities is neither sudden nor mysterious—it is permitted into existence through public processes that unfold over years.
2. Population as a base driver, not a single-variable shortcut
Population does not explain every budget line, but it explains most recurring demand when paired with a small number of modifiers.
At its core, many municipal services follow this structure:
Demand = Fixed Base + Per-Capita Rate · Population
While individual events vary, aggregate demand scales with population.
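This base-plus-rate structure can be sketched in a few lines. The numbers below are hypothetical placeholders, not calibrated values:

```python
# Illustrative sketch: recurring service demand modeled as a fixed base
# plus a per-capita rate. Base and rate values are assumed, not empirical.

def service_demand(population: int, base: float, per_capita_rate: float) -> float:
    """Aggregate annual demand = fixed base + rate * population."""
    return base + per_capita_rate * population

# Example: calls for service with an assumed fixed floor of 500 calls/year
# and an assumed 0.8 calls per resident per year.
calls = service_demand(population=20_000, base=500, per_capita_rate=0.8)
print(calls)
```

The same two-parameter shape reappears in the staffing model of Section 7, where the fixed term captures minimum coverage.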
3.2 Capacity, not consumption, drives budgets
Municipal budgets fund capacity, not just usage:
Staff must be available before calls occur
Facilities must exist before staff are hired
Vehicles and equipment must be in place before service delivery
Capacity decisions are inherently population-driven.
4. Population growth is observable before it arrives
A defining feature of local government forecasting—often underappreciated—is that population growth is authorized through public approvals long before residents appear in census or utility data.
Population does not “arrive”; it progresses through a pipeline.
5. The development pipeline as a population forecasting timeline
5.1 Annexation: strategic intent (years out)
Annexation establishes:
Jurisdictional responsibility
Long-term service obligations
Future land-use authority
While annexation does not create immediate population, it signals where population will eventually be allowed.
Forecast role:
Long-range horizon marker
Infrastructure and service envelope planning
Typical lead time: 3–10 years
5.2 Zoning: maximum theoretical population
Zoning converts land into entitled density.
From zoning alone, cities can estimate:
Maximum dwelling units
Maximum population at buildout
Long-run service ceilings
Zoning defines upper bounds, even if timing is uncertain.
Forecast role:
Long-range capacity planning
Useful for master plans and utility sizing
Typical lead time: 3–7 years
5.3 Preliminary plat: credible development intent
Preliminary plat approval signals:
Developer capital commitment
Defined lot counts
Identified phasing
Population estimates become quantifiable, even if delivery timing varies.
Forecast role:
Medium-high certainty population
First stage for phased population modeling
Typical lead time: 1–3 years
5.4 Final plat: scheduled population
Final plat approval:
Legally creates lots
Locks in density and configuration
Triggers infrastructure construction
Impact fees and other costs are committed
At this point, population arrival is no longer speculative.
5.5 Infrastructure construction: arrival on a construction schedule
Once streets, utilities, and drainage are built, population arrival becomes physically constrained by construction schedules.
Forecast role:
Narrow timing window
Supports staffing lead-time decisions
Typical lead time: 6–18 months
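Once lots are legally created and infrastructure is underway, arrivals can be staged over the lead-time window rather than booked all at once. A hypothetical absorption sketch, with lot count, quarterly absorption pace, and household size all assumed:

```python
# Hypothetical absorption sketch: spread a platted phase's lots over
# quarterly arrivals. 2.7 persons per household is an assumed average.

HOUSEHOLD_SIZE = 2.7

def staged_arrivals(lots: int, lots_per_quarter: int,
                    household_size: float = HOUSEHOLD_SIZE):
    """Return a list of (quarter, new_population) until all lots are absorbed."""
    schedule = []
    quarter, remaining = 1, lots
    while remaining > 0:
        absorbed = min(lots_per_quarter, remaining)
        schedule.append((quarter, absorbed * household_size))
        remaining -= absorbed
        quarter += 1
    return schedule

# 100 lots absorbed at 40 per quarter: 40, 40, then 20 lots.
print(staged_arrivals(100, 40))
```

Absorption pace is the main judgment call here; it should be recalibrated against actual certificates of occupancy (Section 5.7).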
5.6 Water meter connections: imminent occupancy
Water meters are one of the most reliable near-term indicators:
Each residential meter ≈ one household
Installations closely precede vertical construction
Forecast role:
Quarterly or monthly population forecasting
Just-in-time operational scaling
Typical lead time: 1–6 months
5.7 Certificates of Occupancy: population realized
Certificates of occupancy convert permitted population into actual population.
At this point:
Service demand begins immediately
Utility consumption appears
Forecasts can be validated
Forecast role:
Confirmation and calibration
Not prediction
6. Population forecasting as a confidence ladder
Development Stage
Population Certainty
Timing Precision
Planning Use
Annexation
Low
Very low
Strategic
Zoning
Low–Medium
Low
Capacity envelopes
Preliminary Plat
Medium
Medium
Phased planning
Final Plat
High
Medium–High
Budget & staffing
Infrastructure Built
Very High
High
Operational prep
Water Meters
Extremely High
Very High
Near-term ops
COs
Certain
Exact
Validation
Population forecasting in cities is therefore graduated, not binary.
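The graduated ladder can be operationalized as a probability-weighted pipeline total. The stages come from the table above, but the unit counts, realization weights, and household size below are hypothetical illustrations, not official probabilities:

```python
# Sketch of the confidence ladder as a weighted pipeline.
# Unit counts and weights are assumed; 2.7 persons/household is an
# assumed average that should be reviewed locally.

HOUSEHOLD_SIZE = 2.7

# (stage, units in pipeline, realization weight)
PIPELINE = [
    ("zoned",            2_000, 0.30),
    ("preliminary_plat",   800, 0.60),
    ("final_plat",         400, 0.85),
    ("infrastructure",     250, 0.95),
    ("water_meters",       120, 0.99),
]

def expected_population(pipeline, household_size=HOUSEHOLD_SIZE):
    """Probability-weighted population implied by the development pipeline."""
    return sum(units * weight * household_size for _, units, weight in pipeline)

print(round(expected_population(PIPELINE)))
```

Early stages contribute capacity-planning signal at low weight; late stages contribute near-certain, near-term population. The weights, not the arithmetic, are where local judgment enters.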
7. From population to staffing
Once population arrival is staged, staffing can be forecast using service-specific ratios and fixed minimums.
7.1 Police example (illustrative ranges)
Sworn officers per 1,000 residents commonly stabilize within broad bands depending on service level, demand, and known local ratios:
Lower demand: ~1.2–1.8
Moderate demand: ~1.8–2.4
High demand: ~2.4–3.5+
Civilian support staff often scale as a fraction of sworn staffing.
The appropriate structure is:
Officers = α_police + β_police · Population
where α accounts for minimum 24/7 coverage and supervision.
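A minimal sketch of this fixed-minimum-plus-per-capita structure, with an assumed α of 12 (a hypothetical 24/7 coverage floor) and an assumed β of 1.8 officers per 1,000 residents:

```python
import math

# Sketch of Officers = alpha + beta * Population. The alpha and beta
# defaults are assumed placeholders, not recommended standards.

def sworn_officers(population: int, alpha: float = 12.0,
                   beta_per_1000: float = 1.8) -> int:
    """Round up to whole positions; round() guards against float noise."""
    raw = alpha + beta_per_1000 * population / 1_000
    return math.ceil(round(raw, 6))

print(sworn_officers(10_000))   # 30 -- the fixed floor dominates when small
print(sworn_officers(60_000))   # 120 -- the per-capita term dominates at scale
```

Note the behavior at both ends: small cities are driven by α (you cannot staff half a shift), large cities by β.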
7.2 General government staffing
Administrative staffing scales with:
Population
Number of employees
Asset inventory
Transaction volume
A fixed core plus incremental per-capita growth captures this reality more accurately than pure ratios.
8. From staffing to facilities
Facilities are a function of:
Headcount
Service configuration
Security and public access needs
A practical planning method:
Facility Size = FTE · Gross SF per FTE
Blended civic office planning ranges typically fall within:
~175–300 gross SF per employee
Specialized spaces (dispatch, evidence, fleet, courts) are layered on separately.
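A sketch of the sizing method above; 225 gross SF per FTE is an assumed midpoint of the ~175–300 range, and specialized space enters as a separate additive term:

```python
# Sketch of Facility Size = FTE * Gross SF per FTE, plus specialized space.
# 225 SF/FTE is an assumed midpoint, not a standard.

def facility_size_sf(fte: int, sf_per_fte: float = 225.0,
                     specialized_sf: float = 0.0) -> float:
    """Gross square footage from headcount plus layered specialized spaces."""
    return fte * sf_per_fte + specialized_sf

# Example: 120 FTE plus an assumed 6,000 SF of dispatch/evidence space.
print(facility_size_sf(fte=120, specialized_sf=6_000))  # 33000.0
```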
9. From facilities to capital and operating costs
9.1 Capital costs
Capital expansion costs are typically modeled as:
Capex = Added SF · Cost per SF · (1 + Soft Costs)
Where soft costs include design, permitting, contingencies, and escalation.
9.2 Operating costs
Facility operating costs scale predictably with size:
Electricity: kWh per SF per year
Maintenance: % of replacement value or $/SF
Custodial: $/SF
Lifecycle renewals
Electricity alone can be reasonably estimated as:
Annual Cost = SF · kWh/SF · $/kWh
This is rarely exact—but it is directionally reliable.
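The two formulas above compose directly. Every input below (cost per SF, soft-cost rate, energy intensity, electricity price) is an assumed placeholder for illustration:

```python
# Sketch of the capital and operating formulas above. All rates are
# assumed placeholders; real projects require local cost data.

def capex(added_sf: float, cost_per_sf: float, soft_cost_rate: float) -> float:
    """Capex = Added SF * Cost per SF * (1 + soft costs)."""
    return added_sf * cost_per_sf * (1 + soft_cost_rate)

def annual_electricity(sf: float, kwh_per_sf: float,
                       price_per_kwh: float) -> float:
    """Annual cost = SF * kWh/SF * $/kWh."""
    return sf * kwh_per_sf * price_per_kwh

# Example: 33,000 SF at an assumed $450/SF with 30% soft costs,
# then an assumed 15 kWh/SF/year at $0.11/kWh.
print(round(capex(33_000, 450, 0.30)))
print(round(annual_electricity(33_000, 15, 0.11)))
```

Directionally reliable is the right standard here: the point is to surface the operating tail of a capital decision before approval, not to predict a utility bill.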
10. Key modifiers that refine population models
Population alone is powerful but incomplete. High-quality forecasts adjust for:
Density and land use
Daytime population and employment
Demographics
Service standards
Productivity and technology
Geographic scale (lane miles, acres)
These modifiers refine, but do not replace, population as the base driver.
11. Why growth surprises cities anyway
When cities claim growth was “unexpected,” the issue is rarely lack of information. More often:
Development signals were not integrated into finance models
Staffing and capital planning lagged approvals
Fixed minimums were ignored
Threshold effects (new stations, expansions) were deferred too long
Growth that appears sudden is usually forecastable growth that was not operationalized.
12. Conclusion
Population is the primary driver of local government demand, but more importantly, it is a predictable driver. Through annexation, zoning, platting, infrastructure construction, utility connections, and certificates of occupancy, cities possess a multi-year advance view of population arrival.
This makes it possible to:
Phase staffing rationally
Time facilities before overload
Align capital investment with demand
Improve credibility with councils, auditors, and rating agencies
In local government, population growth is not a surprise. It is a permitted, engineered, and scheduled outcome of public decisions. A forecasting system that treats population as both a driver and a leading indicator is not speculative—it is simply paying attention to the city’s own approvals.
Appendix A
Defensibility of Population-Driven Forecasting Models
A response framework for auditors, rating agencies, and governing bodies
Purpose of this appendix
This appendix addresses a common concern raised during budget reviews, audits, bond disclosures, and council deliberations:
“Population-based forecasts seem too simplistic or speculative.”
The purpose here is not to argue that population is the only factor affecting local government costs, but to demonstrate that population-driven forecasting—when anchored to development approvals and adjusted for service standards—is methodologically sound, observable, and conservative.
A.1 Population forecasting is not speculative in local government
A frequent misconception is that population forecasts rely on demographic projections or external estimates. In practice, this model relies primarily on the city’s own legally binding approvals.
Population growth enters the forecast only after it has passed through:
Annexation agreements
Zoning entitlements
Preliminary and final plats
Infrastructure construction
Utility connections
Certificates of occupancy
These are public, documented actions, not assumptions.
Key distinction for reviewers: This model does not ask “How fast might the city grow?” It asks “What growth has the city already approved, and when will it become occupied?”
A.2 Population is treated as a leading indicator, not a lagging one
Traditional population measures (census counts, ACS estimates) are lagging indicators. This model explicitly avoids relying on those for near-term forecasting.
Instead, it uses development milestones as leading indicators, each with increasing certainty and narrower timing windows.
For audit and disclosure purposes:
Early-stage entitlements affect only long-range capacity planning
Staffing and capital decisions are triggered only at later, high-certainty stages
Near-term operating impacts are tied to utility connections and COs
This layered approach prevents premature spending while avoiding reactive under-staffing.
A.3 Fixed minimums prevent over-projection in small or slow-growth cities
A common audit concern is that per-capita models overstate staffing needs.
This model explicitly separates:
Fixed baseline capacity (α)
Incremental population-driven capacity (β)
This structure:
Prevents unrealistic staffing increases in early growth stages
Scales operating costs predictably with assets and space
Remains transparent, testable, and adjustable
Therefore: A population-driven forecasting model of this type represents a prudent, defensible, and professionally reasonable approach to long-range municipal planning.
Appendix B
Consequences of Failing to Anticipate Population Growth
A diagnostic review of reactive municipal planning
Purpose of this appendix
This appendix describes common failure patterns observed in cities that do not systematically link development approvals to population, staffing, and facility planning. These outcomes are not the result of negligence or bad intent; they typically arise from fragmented information, short planning horizons, or the absence of an integrated forecasting framework.
The patterns described below are widely recognized in municipal practice and are offered to illustrate the practical risks of reactive planning.
B.1 “Surprise growth” that was not actually a surprise
A frequent narrative in reactive cities is that growth “arrived suddenly.” In most cases, the growth was visible years earlier through zoning approvals, plats, or utility extensions but was not translated into staffing or capital plans.
Common indicators:
Approved subdivisions not reflected in operating forecasts
Development tracked only by planning staff, not finance or operations
Population discussed only after occupancy
Consequences:
Budget shocks
Emergency staffing requests
Loss of credibility with governing bodies
B.2 Knee-jerk staffing reactions
When growth impacts become unavoidable, reactive cities often respond through hurried staffing actions.
Typical symptoms:
Mid-year supplemental staffing requests
Heavy reliance on overtime
Accelerated hiring without workforce planning
Training pipelines overwhelmed
Consequences:
Elevated labor costs
Increased burnout and turnover
Declining service quality during growth periods
Inefficient long-term staffing structures
B.3 Under-sizing followed by over-correction
Without forward planning, cities often alternate between two extremes:
Under-sizing due to conservative or delayed response
Over-sizing in reaction to service breakdowns
Examples:
Facilities built too small “to be safe”
Rapid expansions shortly after completion
Swing from staffing shortages to excess capacity
Consequences:
Higher lifecycle costs
Poor space utilization
Perception of waste or mismanagement
B.4 Obsolete facilities at the moment of completion
Facilities planned without reference to future population often open already constrained.
Common causes:
Planning based on current headcount only
Ignoring entitled but unoccupied development
Failure to include expansion capability
Consequences:
Expensive retrofits
Disrupted operations during expansion
Shortened facility useful life
This is one of the most costly errors because capital investments are long-lived and difficult to correct.
B.5 Deferred capital followed by crisis-driven spending
Reactive cities often delay capital investment until systems fail visibly.
Typical patterns:
Fire stations added only after response times degrade
Police facilities expanded only after overcrowding
Utilities upgraded only after service complaints
Consequences:
Emergency procurement
Higher construction costs
Increased debt stress
Lost opportunity for phased financing
B.6 Misalignment between departments
When population intelligence is not shared across departments:
Planning knows what is coming
Finance budgets based on current year
Operations discover impacts last
Consequences:
Conflicting narratives to council
Fragmented decision-making
Reduced trust between departments
Population-driven forecasting provides a common factual baseline.
B.7 Overreliance on lagging indicators
Reactive cities often rely heavily on:
Census updates
Utility consumption after occupancy
Service call increases
These indicators confirm growth after it has already strained capacity.
Consequences:
Persistent lag between demand and response
Structural understaffing
Continual “catch-up” budgeting
B.8 Political whiplash and credibility erosion
Unanticipated growth pressures often force councils into repeated difficult votes:
Emergency funding requests
Mid-year budget amendments
Rapid debt authorizations
Over time, this leads to:
Voter skepticism
Council fatigue
Reduced tolerance for legitimate future investments
Planning failures become governance failures.
B.9 Inefficient use of taxpayer dollars
Ironically, reactive planning often costs more, not less.
Cost drivers include:
Overtime premiums
Compressed construction schedules
Retrofit and rework costs
Higher borrowing costs due to rushed timing
Proactive planning spreads costs over time and reduces risk premiums.
B.10 Organizational stress and morale impacts
Staff experience growth pressures first.
Observed impacts:
Chronic overtime
Inadequate workspace
Equipment shortages
Frustration with leadership responsiveness
Over time, this contributes to:
Higher turnover
Loss of institutional knowledge
Reduced service consistency
B.11 Why these failures persist
These patterns are not caused by incompetence. They persist because:
Growth information is siloed
Forecasting is viewed as speculative
Political incentives favor short-term restraint
Capital planning horizons are too short
Absent a formal framework, cities default to reaction.
B.12 Summary for governing bodies
Cities that do not integrate development approvals into population-driven forecasting commonly experience:
Perceived “surprise” growth
Emergency staffing responses
Repeated under- and over-sizing
Facilities that age prematurely
Higher long-term costs
Organizational strain
Reduced public confidence
None of these outcomes are inevitable. They are symptoms of not using information the city already has.
B.13 Closing observation
The contrast between proactive and reactive cities is not one of optimism versus pessimism. It is a difference between:
Anticipation versus reaction
Sequencing versus scrambling
Planning versus explaining after the fact
Population-driven forecasting does not eliminate uncertainty. It replaces surprise with preparation.
Appendix C
Population Readiness & Forecasting Discipline Checklist
A self-assessment for proactive versus reactive cities
Purpose: This checklist allows a city to evaluate whether it is systematically anticipating population growth—or discovering it after impacts occur. It is designed for use by city management teams, finance directors, auditors, and governing bodies.
How to use: For each item, mark:
✅ Yes / In place
⚠️ Partially / Informal
❌ No / Not done
Patterns matter more than individual answers.
Section 1 — Visibility of Future Population
C-1 Do we maintain a consolidated list of annexed, zoned, and entitled land with estimated buildout population?
C-2 Are preliminary and final plats tracked in a format usable by finance and operations (not just planning)?
C-3 Do we estimate population by development phase, not just at full buildout?
C-4 Is there a documented method for converting lots or units into population (household size assumptions reviewed periodically)?
C-5 Do we distinguish between long-range potential growth and near-term probable growth?
Red flag: Population is discussed primarily in narrative terms (“fast growth,” “slowing growth”) rather than quantified and staged.
Section 2 — Timing and Lead Indicators
C-6 Do we identify which development milestone triggers planning action (e.g., preliminary plat vs final plat)?
C-7 Are infrastructure completion schedules incorporated into population timing assumptions?
C-8 Are water meter installations or equivalent utility connections tracked and forecasted?
C-9 Do we use certificates of occupancy to validate and recalibrate population forecasts annually?
C-10 Is population forecasting treated as a rolling forecast, not a once-per-year estimate?
Red flag: Population is updated only when census or ACS data is released.
Section 3 — Staffing Linkage
C-11 Does each major department have an identified population or workload driver?
C-12 Are fixed minimum staffing levels explicitly separated from growth-driven staffing?
C-13 Are staffing increases tied to forecasted population arrival, not service breakdowns?
C-14 Do hiring plans account for lead times (recruitment, academies, training)?
C-15 Can we explain recent staffing increases as either:
population growth, or
explicit policy/service-level changes?
Red flag: Staffing requests frequently cite “we are behind” without reference to forecasted growth.
Section 4 — Facilities and Capital Planning
C-16 Are facility size requirements derived from staffing projections, not current headcount?
C-17 Do capital plans include expansion thresholds (e.g., headcount or service load triggers)?
C-18 Are new facilities designed with future expansion capability?
C-19 Are entitled-but-unoccupied developments considered when evaluating future facility adequacy?
C-20 Do we avoid building facilities that are at or near capacity on opening day?
Red flag: Facilities require major expansion within a few years of completion.
Section 5 — Operating Cost Awareness
C-21 Are operating costs (utilities, maintenance, custodial) modeled as a function of facility size and assets?
C-22 Are utility cost impacts of expansion estimated before facilities are approved?
C-23 Do we understand how population growth affects indirect departments (HR, IT, finance)?
C-24 Are lifecycle replacement costs considered when adding capacity?
Red flag: Operating cost increases appear as “unavoidable surprises” after facilities open.
Section 6 — Cross-Department Integration
C-25 Do planning, finance, and operations use the same population assumptions?
C-26 Is growth discussed in joint meetings, not only within planning?
C-27 Does finance receive regular updates on development pipeline status?
C-28 Are growth assumptions documented and shared, not implicit or informal?
Red flag: Different departments give different growth narratives to council.
Section 7 — Governance and Transparency
C-29 Can we clearly explain to council why staffing or capital is needed before service failure occurs?
C-30 Are population-driven assumptions documented in budget books or CIP narratives?
C-31 Do we distinguish between:
growth-driven needs, and
discretionary service enhancements?
C-32 Can auditors or rating agencies trace growth-related decisions back to documented approvals?
Red flag: Growth explanations rely on urgency rather than evidence.
Section 8 — Validation and Learning
C-33 Do we compare forecasted population arrival to actual COs annually?
C-34 Are forecasting errors analyzed and corrected rather than ignored?
C-35 Do we adjust household size, absorption rates, or timing assumptions over time?
Red flag: Forecasts remain unchanged year after year despite clear deviations.
Scoring Interpretation (Optional)
Mostly ✅ → Proactive, anticipatory city
Mix of ✅ and ⚠️ → Partially planned, risk of reactive behavior
Many ❌ → Reactive city; growth will feel like a surprise
A city does not need perfect scores. The presence of structure, documentation, and sequencing is what matters.
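The optional scoring bands above can be tallied mechanically. The thresholds below (70% affirmative for "proactive," 40% negative for "reactive") are illustrative assumptions, not part of the checklist itself:

```python
from collections import Counter

# Hypothetical scoring sketch for the 35-item checklist. Band thresholds
# are assumed for illustration; patterns matter more than exact cutoffs.

def interpret(answers):
    """answers: list of 'yes', 'partial', or 'no' for the checklist items."""
    tally = Counter(answers)
    total = len(answers)
    if tally["yes"] >= 0.7 * total:
        return "Proactive, anticipatory city"
    if tally["no"] >= 0.4 * total:
        return "Reactive city; growth will feel like a surprise"
    return "Partially planned; risk of reactive behavior"

# Example: 28 yes, 5 partial, 2 no out of 35 items.
print(interpret(["yes"] * 28 + ["partial"] * 5 + ["no"] * 2))
```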
Closing Note for Leadership
If a city can answer most of these questions affirmatively, it is not guessing about growth—it is managing it. If many answers are negative, the city is likely reacting to outcomes it had the power to anticipate.
Population growth does not cause planning problems. Ignoring known growth signals does.
Appendix D
Population-Driven Planning Maturity Model
A framework for assessing and improving municipal forecasting discipline
Purpose of this appendix
This maturity model describes how cities evolve in their ability to anticipate population growth and translate it into staffing, facility, and financial planning. It recognizes that most cities are not “good” or “bad” planners; they are simply at different stages of organizational maturity.
Each level builds logically on the prior one. Advancement does not require perfection—only structure, integration, and discipline.
Level 1 — Reactive City
“We didn’t see this coming.”
Characteristics
Population discussed only after impacts are felt
Reliance on census or anecdotal indicators
Growth described qualitatively (“exploding,” “slowing”)
Staffing added only after service failure
Capital projects triggered by visible overcrowding
Frequent mid-year budget amendments
Typical behaviors
Emergency staffing requests
Heavy overtime usage
Facilities opened already constrained
Surprise operating cost increases
Organizational mindset
Growth is treated as external and unpredictable.
Risks
Highest long-term cost
Lowest credibility with councils and rating agencies
Chronic organizational stress
Level 2 — Aware but Unintegrated City
“Planning knows growth is coming, but others don’t act on it.”
Characteristics
Development pipeline tracked by planning
Finance and operations not fully engaged
Growth acknowledged but not quantified in budgets
Capital planning still reactive
Limited documentation of assumptions
Typical behaviors
Late staffing responses despite known development
Facilities planned using current headcount
Disconnect between planning reports and budget narratives
Organizational mindset
Growth is known, but not operationalized.
Risks
Continued surprises
Internal frustration
Mixed messages to council
Level 3 — Structured Forecasting City
“We model growth, but execution lags.”
Characteristics
Population forecasts tied to development approvals
Preliminary staffing models exist
Fixed minimums recognized
Capital needs identified in advance
Forecasts updated annually
Typical behaviors
Better budget explanations
Improved CIP alignment
Still some late responses due to execution gaps
Organizational mindset
Growth is forecastable, but timing discipline is still developing.
Strengths
Credible analysis
Reduced emergencies
Clearer governance conversations
Level 4 — Integrated Planning City
“Approvals, staffing, and capital move together.”
Characteristics
Development pipeline drives population timing
Staffing plans phased to population arrival
Facility sizing based on projected headcount
Operating costs modeled from assets
Cross-department coordination is routine
Typical behaviors
Hiring planned ahead of demand
Facilities open with expansion capacity
Capital timed to avoid crisis spending
Clear audit trail from approvals to costs
Organizational mindset
Growth is managed, not reacted to.
Benefits
Stable service delivery during growth
Higher workforce morale
Strong credibility with governing bodies
Level 5 — Adaptive, Data-Driven City
“We learn, recalibrate, and optimize continuously.”
Characteristics
Rolling population forecasts
Development milestones tracked in near-real time
Annual validation against COs and utility data
Forecast errors analyzed and corrected
Scenario modeling for alternative growth paths
Typical behaviors
Minimal surprises
High confidence in long-range plans
Early identification of inflection points
Proactive communication with councils and investors
Organizational mindset
Growth is a controllable system, not a threat.
Benefits
Lowest lifecycle cost
Highest service reliability
Institutional resilience
Summary Table
Level   Description            Core Risk
1       Reactive               Crisis-driven decisions
2       Aware, unintegrated    Late responses
3       Structured             Execution lag
4       Integrated             Few surprises
5       Adaptive               Minimal risk
Key Insight
Most cities are not failing—they are stuck between Levels 2 and 3. The largest gains come not from sophisticated analytics, but from integration and timing discipline.
Progression does not require:
Perfect forecasts
Advanced software
Large consulting engagements
It requires:
Using approvals the city already grants
Sharing population assumptions across departments
Sequencing decisions intentionally
Closing Observation
Cities do not choose whether they grow. They choose whether growth feels like a surprise or a scheduled event.
A collaboration between Lewis McLain & AI (Suggested by Becky Brooks)
What follows is a funny, light-hearted, non-offensive survey designed as if a city or organization created it, full of the same bureaucratic absurdity but tailored for someone who’s just spent a couple of weeks in jail.
It is intentionally ridiculous — the kind of tone-deaf survey a city might send, trying to measure the “customer experience.”
⸻
POST-INCARCERATION CUSTOMER SATISFACTION SURVEY
Because your feedback helps us improve the parts of the experience we had no intention of improving.
Thank you for recently spending 10–45 days with us!
Your stay matters to us, and we’d love your thoughts.
Please take 3–90 minutes to complete this survey.
⸻
SECTION 1 — OVERALL EXPERIENCE
1. How satisfied were you with your recent incarceration?
• ☐ Very Satisfied
• ☐ Satisfied
• ☐ Neutral (emotionally or spiritually)
• ☐ Dissatisfied
• ☐ Very Dissatisfied
• ☐ I would like to speak to the manager of jail, please
2. Would you recommend our facility to friends or family?
• ☐ Yes, absolutely
• ☐ Only if they deserve it
• ☐ No, but I might recommend it to my ex
3. Did your stay meet your expectations?
• ☐ It exceeded them, shockingly
• ☐ It met them, sadly
• ☐ What expectations?
• ☐ I didn’t expect any of this
⸻
SECTION 2 — ACCOMMODATIONS
4. How would you rate the comfort of your sleeping arrangements?
• ☐ Five stars (would book again on Expedia)
• ☐ Three stars (I’ve slept on worse couches)
• ☐ One star (my back may sue you)
• ☐ Zero stars (please never ask this again)
5. How would you describe room service?
• ☐ Prompt and professional
• ☐ Present
• ☐ Sporadic
• ☐ I was unaware room service was an option
• ☐ Wait… was that what breakfast was supposed to be?
⸻
SECTION 3 — DINING EXPERIENCE
6. Rate the culinary artistry of our meals:
• ☐ Michelin-worthy
• ☐ Edible with effort
• ☐ Mysterious but survivable
• ☐ I have questions that science cannot answer
7. Did you enjoy the variety of menu options?
• ☐ Yes
• ☐ No
• ☐ I’m still not sure if Tuesday’s entrée was food
⸻
SECTION 4 — PROGRAMMING & ACTIVITIES
8. Which of the following activities did you participate in?
• ☐ Walking in circles
• ☐ Sitting
• ☐ Thinking about life
• ☐ Thinking about lunch
• ☐ Wondering why time moves slower in here
• ☐ Other (please describe your spiritual journey): ___________
9. Did your stay include any unexpected opportunities for personal growth?
• ☐ Learned patience
• ☐ Learned humility
• ☐ Learned the legal system very quickly
• ☐ Learned I never want to fill out this survey again
⸻
SECTION 5 — CUSTOMER SERVICE
10. How would you rate the friendliness of staff?
• ☐ Surprisingly pleasant
• ☐ Professionally indifferent
• ☐ “Move over there” was said with warmth
• ☐ I think they liked me
• ☐ I think they didn’t
11. Did staff answer your questions in a timely manner?
• ☐ Yes
• ☐ No
• ☐ I’m still waiting
• ☐ I learned not to ask questions
⸻
SECTION 6 — RELEASE PROCESS
12. How smooth was your release experience?
• ☐ Smooth
• ☐ Mostly smooth
• ☐ Bumpy
• ☐ Like trying to exit a maze blindfolded
13. Upon release, did you feel ready to re-enter society?
• ☐ Yes, I am reborn
• ☐ Somewhat
• ☐ Not at all
• ☐ Please define “ready”
⸻
SECTION 7 — FINAL COMMENTS
14. If you could change one thing about your stay, what would it be?
(Please choose only one):
• ☐ The walls
• ☐ The food
• ☐ The schedule
• ☐ The length of stay
• ☐ All of the above
• ☐ I decline to answer on advice of counsel
15. Additional feedback for management:
⸻
⸻
(Comments will be carefully reviewed by someone someday.)
⸻
Thank You!
Your answers will be used to improve future guest experiences.*
A collaboration between Lewis McLain & AI
A long answer to a short question from Tuesday Morning Men’s Bible Study
“Granddad… my faith is slipping.”
“Granddad, can I tell you something and you won’t think less of me? I feel like my faith in God is slipping away. I’ve prayed—truly prayed—for our family to heal, for hearts to soften, for conversations about the Lord to open again. These aren’t selfish prayers. They’re for relationships to be mended, for love to return, for estrangements to disappear.
But nothing changes. Some hearts grow colder. And any mention of God shuts everything down.
Why doesn’t God answer these good prayers? Why is He silent when the need is so great? I don’t want to lose my faith, Granddad… but I don’t know how much more silence or tension I can take.”
THE GRANDFATHER’S ANSWER:
A Loving Reassurance About the Awakening—The Kairos Moment God Has Appointed
Come here, child. Sit beside me. I want to tell you something about God’s timing, something Scripture calls kairos—the appointed moment, the perfectly chosen hour when God reaches the heart in a way no human effort ever could.
Before any other story, let’s start with the one Jesus Himself told.
THE PRODIGAL SON: THE PATTERN OF ALL AWAKENINGS
(Luke 15:11–24)
A young man demands his inheritance, leaves home, and wastes everything in reckless living (vv. 12–13). When famine comes, he takes the lowest job imaginable—feeding pigs—and even longs to eat their food (vv. 14–16).
Then comes the sentence that describes every true spiritual awakening:
“But when he came to himself…” (Luke 15:17)
That is the kairos moment.
What exactly happened in that moment?
Reality shattered illusion. He saw his condition honestly for the first time.
Memory returned. He remembered his father’s goodness.
Identity stirred. He realized, “This is not who I am.”
Hope flickered. “My father’s servants have bread enough…”
The will turned. “I will arise and go to my father.” (v. 18)
Notice something important:
No one persuaded him.
No sermon reached him.
No family member argued with him.
No timeline pressured him.
His awakening came when the Father’s timing made his heart ready.
The father in the story doesn’t chase him into the far country. He waits. He watches. He trusts the process of grace.
And “while he was still a long way off,” the father sees him and runs (v. 20).
Why this matters for your prayers:
You’re praying for the very thing Jesus describes here. But the awakening of a heart—any heart—comes as God’s gift, in God’s hour, through God’s patient love.
The Prodigal Son shows us: God can change a life in a single moment. But He decides when that moment arrives.
This is the foundation. Now let me walk you through the other stories that prove this pattern again and again.
1. Jacob at Peniel — The Wrestling That Revealed His True Self
(Genesis 32:22–32)
Jacob spent years relying on himself. But his heart did not change— not through blessings, not through hardship, not through distance.
Only when God wrestled him in the night and touched his hip (v. 25) did Jacob awaken.
This was his kairos:
When his strength failed, his faith was born.
He limped away, but walked new— with a new name, a new identity, and a new dependence on God.
2. Nebuchadnezzar — One Glance That Restored His Sanity
(Daniel 4:28–37)
After years of pride, exile, and madness, his turning point wasn’t long or gradual. It happened in one second:
“I lifted my eyes to heaven, and my sanity was restored.” (Dan. 4:34)
The moment he looked up was the moment God broke through.
Kairos is when God uses a single upward glance to undo years of blindness.
3. Jonah — The Awakening in the Deep
(Jonah 2)
Jonah ran from God’s call until he reached the bottom of the sea. Only there, trapped in the fish, did he pray the words Scripture records:
“When my life was fainting away, I remembered the LORD.” (Jonah 2:7)
That remembering? That was kairos.
When every escape ended, God opened his eyes.
4. David — Truth Striking in One Sentence
(2 Samuel 12; Psalm 51)
Nathan’s story awakened what months of hidden sin could not. When Nathan said, “You are the man” (2 Sam. 12:7), David’s heart broke open.
He went from blindness to confession instantly:
“I have sinned against the LORD.” (v. 13)
Psalm 51 pours out the repentance birthed in that moment.
Kairos often comes through truth spoken at the one moment God knows the heart can receive it.
5. Peter — The Rooster’s Cry and Jesus’ Look
(Luke 22:54–62)
After Peter’s third denial, Scripture says:
“The Lord turned and looked at Peter.” (v. 61)
That look shattered Peter’s fear and self-deception.
He went out and wept bitterly— not because he was condemned, but because he was awakened.
Kairos can be a look, a memory, a sound—something only God can time.
6. Saul — A Heart Reversed on the Damascus Road
(Acts 9:1–19)
Saul was not softening. He was escalating.
But Jesus met him at the crossroads and asked:
“Why are you persecuting Me?” (v. 4)
That question was a divine appointment—the moment Saul’s life reversed direction forever.
Kairos is when Jesus interrupts a story we thought was going one way and writes a new one.
7. What All These Stories Teach About Kairos Moments
Across all Scripture, kairos moments share the same attributes:
1. They are God-timed.
We cannot rush them. (Ecclesiastes 3:11)
2. They are God-initiated.
Awakenings are born of revelation, not persuasion. (John 6:44)
3. They break through illusion and restore reality.
“Coming to himself” means the heart finally sees truth. (Luke 15:17)
4. They lead to movement toward God.
Every awakening ends with a step homeward.
Your prayers are not being ignored. They are being gathered into the moment God is preparing.
8. Why This Matters for Your Family
You are praying for softened hearts, restored relationships, spiritual awakening. Those are kairos prayers, not chronos prayers.
Chronos is slow. Kairos is sudden.
Chronos waits. Kairos transforms.
You can’t see it yet, but God is preparing:
circumstances
conversations
memories
encounters
turning points
just like the father of the prodigal knew that hunger, hardship, and reflection would eventually lead his son home.
The father didn’t lose hope. He didn’t chase the son into the far country. He trusted that God’s timing would bring his child to the awakening moment.
You must do the same.
9. Take Courage, Sweetheart: The God Who Awakened Prodigals Will Awaken Hearts Again
The Prodigal Son’s turning point didn’t look like a miracle. It looked like ordinary hunger.
David’s looked like a story. Peter’s looked like a rooster. Saul’s looked like a question. Nebuchadnezzar’s looked like a glance. Jonah’s looked like despair. Jacob’s looked like a limp.
Kairos moments rarely look divine at first. But they are.
And when God moves, hearts—no matter how hard—can turn in a single breath.
Don’t lose faith, child. The silence is not God’s absence. It is God’s preparation.
And when your family’s kairos moment comes, you will say what the father in Jesus’ story said:
“This my child was dead, and is alive again; was lost, and is found.” (Luke 15:24)
Until then, hold on. Your prayers are planting seeds that God will awaken in His perfect time.
For more than fifty years, Texas has been at the center of American redistricting law. Few states have produced as many major Supreme Court decisions shaping the meaning of the Voting Rights Act, the boundaries of racial gerrymandering doctrine, and—perhaps most significantly—the Court’s modern unwillingness to police partisan gerrymandering.
Two cases define the modern era for Texas: LULAC v. Perry (2006) and Abbott v. Perez (2018). Together, they reveal how the Court analyzes racial vote dilution, when partisan motives are permissible, how intent is inferred or rejected, and what evidentiary burdens challengers must meet.
At the heart of the Court’s reasoning is a recurring tension:
the Constitution forbids racial discrimination in redistricting,
the Voting Rights Act prohibits plans that diminish minority voting strength,
but the Court has repeatedly held that partisan advantage, even aggressive partisan advantage, is not generally unconstitutional.
Texas’s maps have allowed the Court to articulate, refine, and—many argue—narrow these doctrines.
I. LULAC v. Perry (2006): Partisan Motives Allowed, But Not Minority Vote Dilution
Background
In 2003, after winning unified control of state government, Texas Republicans enacted a mid-decade congressional redistricting plan replacing the court-drawn map used in 2002. It was an openly partisan effort to convert a congressional delegation that had favored Democrats into a Republican-leaning one.
Challengers argued:
The mid-decade redistricting itself was unconstitutional.
The legislature’s partisan intent violated the Equal Protection Clause.
The plan diluted Latino voting strength in violation of Section 2 of the Voting Rights Act, particularly in old District 23.
Several districts were racial gerrymanders, subordinating traditional districting principles to race.
Arguments Before the Court
Challengers:
Texas had engaged in unprecedented partisan manipulation lacking a legitimate state purpose.
The dismantling of Latino opportunity districts—especially District 23—reduced the community’s ability to elect its preferred candidate.
Race was used as a tool to achieve partisan ends, in violation of Shaw v. Reno-line racial gerrymandering rules.
Texas:
Nothing in the Constitution forbids mid-decade redistricting.
Political gerrymandering, even when aggressive and obvious, was allowed under Davis v. Bandemer (1986).
Latino voters in District 23 were not “cohesive” enough to qualify for Section 2 protection.
District configurations reflected permissible political considerations.
The Court’s Decision
The Court produced a fractured set of opinions, but several clear conclusions emerged.
1. Mid-Decade Redistricting Is Constitutional
The Court held that states are not restricted to once-a-decade redistricting. Nothing in the Constitution or federal statute bars legislatures from replacing a map mid-cycle. This effectively legitimized Texas’s overtly partisan decision to redraw the map simply because political control had shifted.
2. No Judicially Manageable Standard for Partisan Gerrymandering
The Court again declined to articulate a manageable standard for judging partisan gerrymandering. Justice Kennedy, writing for the controlling plurality, expressed concern about severe partisan abuses but concluded that no judicially administrable rule existed.
Key takeaway: Texas’s partisan motivation, even if blatant, was not itself unconstitutional.
3. Section 2 Violation in District 23: Latino Voting Strength Was Illegally Diluted
This was the major substantive ruling.
The Court found that Texas dismantled an existing Latino opportunity district (CD-23) precisely because Latino voters were on the verge of electing their preferred candidate. The legislature:
removed tens of thousands of cohesive Latino voters from the district,
replaced them with low-turnout Latino populations less likely to vote against the incumbent,
and justified the move under the guise of creating a new Latino-majority district elsewhere.
This manipulation, the Court held, denied Latino voters an equal opportunity to elect their candidate of choice, violating Section 2.
4. Racial Gerrymandering Claims Mostly Fail
The Court rejected most Shaw-type racial gerrymandering claims because plaintiffs failed to prove that race, rather than politics, predominated. This reflects a theme that becomes even stronger in later cases: when race and politics correlate—as they often do in Texas—challengers must provide powerful evidence that race, not party, drove the lines.
II. Abbott v. Perez (2018): A High Bar for Proving Discriminatory Intent
Background
After the 2010 census, Texas enacted new maps. A federal district court found that several districts were intentionally discriminatory and put court-drawn interim maps in place for the 2012 elections. In 2013, Texas then enacted maps that were largely identical to those interim maps.
Challengers argued that:
The original 2011 maps were passed with discriminatory intent.
The 2013 maps, though based on the court’s design, continued to embody the taint of 2011.
Multiple districts across Texas diluted minority voting strength or were racial gerrymanders.
Texas argued that:
The 2013 maps were valid because they were largely adopted from a court-approved version.
Any discriminatory intent from 2011 could not be imputed to the 2013 legislature.
Plaintiffs bore the burden of proving intentional discrimination district by district.
The Court’s Decision
In a 5–4 ruling, the Supreme Court reversed almost all findings of discriminatory intent against Texas.
1. Burden of Proof Is on Challengers, Not the State
The Court rejected the lower court’s presumption that Texas acted with discriminatory intent in 2013 merely because the 2011 legislature had been found to do so.
Key Holding: A finding of discriminatory intent in a prior map does not shift the burden; challengers must prove new intent for each new plan.
This significantly tightened the evidentiary bar.
2. Presumption of Legislative Good Faith
Justice Alito, writing for the majority, emphasized a longstanding principle:
Legislatures are entitled to a presumption of good faith unless challengers provide direct and persuasive evidence otherwise.
This presumption made it much harder to prove racial discrimination unless emails, testimony, or map-drawing files showed explicit racial motives.
3. Section 2 Vote-Dilution Claims Demand Rigorous Proof
Challengers failed to show that minority voters were both cohesive and systematically defeated by white bloc voting in many districts. The Court stressed the need for:
clear demographic evidence,
consistent voting patterns,
and demonstration of feasible alternative districts.
4. Only One District Violated the Constitution
The Court affirmed discrimination in Texas House District 90, where the legislature had intentionally moved Latino voters to achieve a specific racial composition.
But the Court rejected violations in every other challenged district.
5. Practical Effect: Courts Must Defer Unless Evidence Is Unusually Strong
Abbott v. Perez is widely viewed as one of the strongest modern statements of judicial deference to legislatures in redistricting—even when past discrimination has been found.
Justice Sotomayor’s dissent called the majority opinion “astonishing in its blindness.”
III. What These Cases Together Mean: Why the Court Upheld Texas’s Maps
Across both LULAC (2006) and Abbott (2018), a coherent theme emerges in the Supreme Court’s reasoning:
1. Partisan Gerrymandering Is Not the Court’s Job to Police
Unless partisan advantage clearly crosses into racial targeting, the Court will not strike it down. Texas repeatedly argued political motives, and the Court repeatedly accepted them as legitimate.
2. Racial Discrimination Must Be Proven With Specific, District-Level Evidence
Plaintiffs must demonstrate that race—not politics—predominated.
Correlation between race and partisanship is not enough.
Evidence must address each district individually.
3. Legislatures Receive a Strong Presumption of Good Faith
Abbott v. Perez reaffirmed that courts should not infer discriminatory intent from a prior legislature’s misconduct alone.
4. Violations Are Found Only Where Evidence Is Unmistakable
LULAC (2006) found a violation only because evidence clearly showed cohesive Latino voters whose electoral progress was intentionally undermined.
5. Courts Avoid Intruding into “Political Questions”
The Court has repeatedly signaled reluctance to take over the political process. This culminated in Rucho v. Common Cause (2019), where the Court held partisan gerrymandering claims categorically non-justiciable—a rule entirely consistent with how Texas cases were decided.
Conclusion: Why Texas Keeps Winning
Texas’s redistricting cases illustrate how the Supreme Court draws a sharp—and highly consequential—line:
Racial discrimination is unconstitutional, but must be proven with very specific evidence.
Partisan manipulation, even extreme manipulation, is permissible.
Courts defer heavily to state legislatures unless plaintiffs can clearly show that lawmakers used race as a tool, not merely politics.
In LULAC, challengers succeeded only where the evidence of racial vote dilution was unmistakable. In Abbott v. Perez, they failed everywhere except one district because intent was not proven with the level of granularity the Court demanded.
The result is that Texas has repeatedly prevailed in redistricting litigation—not necessarily because its maps are racially neutral, but because the Court has set an unusually high bar for proving racial motive and has washed its hands of partisan claims altogether.
Actually, my first financial models were on green 13-columnar tablets. If you know what I am talking about, I can get pretty close guessing your age.
Most people assume that good analysis starts with a team gathered around a whiteboard, freely offering numbers, assumptions, and ideas. In theory, it sounds collaborative and egalitarian. In reality, that moment — the blank sheet of paper — is where analysis dies. People freeze. Smart, capable, experienced people who absolutely know their business suddenly say nothing when asked to put the first assumptions down.
Early in my career, I tried it the traditional way. I’d walk into a meeting ready to do things “the right way”: engage the group, ask for their best estimates, encourage open discussion. Instead, I got silence. Eyes drifted to the table. Pens clicked. People “would have to get back to me.” Suddenly, no one knew anything. It was as if asking someone to write the first number turned the room into a library reading room during finals week — quiet, anxious, and deeply unproductive.
It took me years to understand the psychology behind this. People aren’t reluctant because they lack insight. They are reluctant because they are afraid of owning the first mistake. The first assumption is the most vulnerable one. Once it is written down, it looks like a position, a commitment, a claim to be defended. And for many professionals — especially those who are cautious, political, or simply overwhelmed — that’s not a place they want to stand.
So, I developed a different approach. I stopped asking for the first draft of ideas and assumptions.
I started building the entire model myself — the assumptions, the structure, the logic, the forecasts — everything. I would take the best information I had, make the best reasonable assumptions I could, and produce a full version. Not a sketch. Not a preliminary worksheet. A full, working model.
Then I would send it to the very people who declined to give me assumptions and simply ask:
“Would you please critique this?”
That one sentence changed everything.
Why Critiquing Works When Creating Doesn’t
Something very human happens when someone is handed a complete model or draft of a report. The reluctance melts away. The fear of being wrong diminishes. The instinct to avoid being “first” is replaced by the instinct to correct, to improve, to clarify, to argue, to refine.
People who gave me nothing on a blank sheet suddenly became:
Detailed
Insightful
Opinionated
Protective of accuracy
Willing to explain nuances they never would have volunteered earlier
The entire room would come alive.
I used to think this was a flaw — that people should be willing to start from scratch. But then I realized the truth: starting is the hardest intellectual act in any field. Creation is vulnerable; critique is safe. The blank page is intimidating; a flawed draft is an invitation.
And here is the real secret:
People are most honest when they are correcting you.
They will tell you the real revenue figure. They will tell you why an assumption is politically impossible. They will tell you which number has never made sense. They will tell you what they truly believe once you’ve already said something they can push against.
Ironically, by giving them something to disagree with, I got the truth I was searching for.
The Picker–Pickee Method for Analytical Work
I call this my “picker–pickee” method (AI hates my term) — not in the social sense of drawing people into conversation, but in the analytical sense of drawing them into ownership. I pick the model. They pick it apart. And in that exchange, we arrive at what I needed all along:
Their actual knowledge. Their real assumptions. Their unfiltered expertise.
Without forcing them to start from zero.
Why This Technique Became One of My Career Signatures
Over time, I realized this was more than a workaround. It was a strategic advantage.
It accelerated projects.
It produced better numbers.
It revealed hidden politics and constraints.
It allowed people to save face while still contributing.
It created buy-in because the team helped “fix” the model.
It ensured that the final product reflected collective wisdom, not my isolated guesswork.
I stopped apologizing for this method. I embraced it. I refined it. And eventually I came to see it as one of the most reliable tools in my entire professional life.
Because the truth is simple:
People don’t want to write the first word, but they will gladly edit the whole paragraph.
If you want real input from reluctant contributors, do the hard part yourself. Build the model. Write the draft report. Take the risk. Put the first assumptions on the page. And then ask for critique — sincerely, humbly, and openly.
They will show you what you needed to know all along.
Closing Reflection
If there is any lesson I wish I had learned earlier, it is this:
You don’t get better analysis by demanding contribution. You get better analysis by giving people something to respond to.
Once I accepted that, my work changed. My relationships with stakeholders changed. And the quality of every model I built improved dramatically.
It may not appear in textbooks, but after decades of practice, this remains one of my most effective — and most human — secrets of the profession.
On December 5, 2025, the Trump administration released its National Security Strategy, a 33-page document that invokes—and dramatically expands upon—one of the oldest principles in American foreign policy: the Monroe Doctrine. The new strategy presents itself as a restoration of hemispheric clarity, but in substance it offers something far more ambitious: a Trump-era corollary that transforms a defensive warning into an assertion of American primacy.
Understanding what this “Trump Corollary” means—and whether it represents a legitimate evolution or a radical departure—requires revisiting the original doctrine, understanding how national security strategies gain force in American governance, assessing reactions across the hemisphere, and considering how American history might have looked had this posture been adopted earlier. It also requires confronting a deceptively simple question: Can a 19th-century doctrine be meaningfully revived in a 21st-century multipolar world?
Part I: The Monroe Doctrine of 1823
Historical Context
President James Monroe articulated his doctrine on December 2, 1823, at a moment when European empires were recalibrating their power in the Western Hemisphere. Russia pressed southward down the Pacific coast. Spain hoped to reclaim Latin American colonies that had recently secured independence. The United States, barely forty years old, lacked the naval power to enforce its preferences but had a growing conviction that the Western Hemisphere required a geopolitical boundary line separating Old World and New.
Monroe and his Secretary of State, John Quincy Adams, relied on an emerging American diplomatic philosophy—one that blended George Washington’s caution against entangling alliances with James Madison’s insistence that foreign interference in the Western Hemisphere posed unacceptable risks. The British sought joint action with the United States, but Adams famously rejected it, arguing that America should not appear as “a cock-boat in the wake of the British man-of-war.”
The Three Core Principles
Monroe’s declaration rested on three foundational principles:
Non-Colonization — The Americas were no longer open to European colonization.
Non-Intervention — Any European attempt to impose its system in the hemisphere would be viewed as a threat.
Separate Spheres — Europe and the Americas operated under fundamentally different political logics and should remain separate.
Reciprocity and Restraint
Often forgotten today is Monroe’s reciprocal pledge:
“With the existing colonies or dependencies of any European power we have not interfered and shall not interfere.”
In other words, the doctrine was defensive, not dominative. It did not seek to revise political arrangements or expand American control. And because the United States lacked the power to enforce it, the doctrine functioned more as a diplomatic aspiration than a military guarantee—its viability propped up, ironically, by the British Royal Navy.
Latin American Perspectives
Latin America’s early reaction was complicated. While many leaders welcomed U.S. opposition to recolonization, they also recognized the unilateral nature of Monroe’s declaration. Over time, as the United States intervened repeatedly in the Caribbean, Central America, and northern South America, skepticism hardened. By the 20th century, much of Latin America viewed the Monroe Doctrine not as a shield against European ambitions but as a mask for American dominance. This historical memory forms the backdrop against which any modern revival—particularly one framed as a U.S. “right” to dictate hemispheric security—will be received.
Later Interpretations
The doctrine evolved dramatically:
Olney Interpretation (1895): Asserted U.S. authority to mediate hemispheric disputes.
Roosevelt Corollary (1904): Claimed the right to intervene in Latin American affairs to prevent “chronic wrongdoing.”
Good Neighbor Policy (1933): Pledged non-intervention, attempting to restore the doctrine’s original spirit.
Cold War Era: Revived interventionist logic to counter communism.
Thus, the doctrine became not a fixed principle but a malleable tool—sometimes restraining U.S. action, other times justifying it.
Part II: The Trump Corollary of 2025
A New Framework
The 2025 National Security Strategy sharply critiques the last 30 years of American foreign policy, dismissing post-Cold War global engagement as utopian overreach. The new governing principle is “America First”—defined not as isolationism but as a recalibration of American obligations, alliances, and priorities.
Core Philosophy
The strategy asserts that safeguarding American sovereignty requires:
“Full control over our borders”
A modernized nuclear deterrent and missile defense “Golden Dome”
Revitalization of American cultural and spiritual health
Economic growth from $30 trillion to $40 trillion within a decade
It rejects the long-standing assumption that American security depends on expansive global commitments.
Guiding Principles
Four principles structure the document:
Peace Through Strength — Deterrence through overwhelming capability.
Non-Interventionism — High thresholds for foreign wars outside the hemisphere.
Flexible Realism — Friendly commercial relations without demanding political reform.
Primacy of Nations — A world anchored in sovereign nation-states.
The Trump Corollary Defined
The document’s centerpiece is the “Trump Corollary to the Monroe Doctrine”:
“The United States will reassert and enforce the Monroe Doctrine to restore American preeminence in the Western Hemisphere… We will deny non-Hemispheric competitors the ability to position forces or other threatening capabilities.”
Where Monroe said “hands off,” Trump says “we will determine what happens here.” Where Monroe rejected joint declarations, Trump rejects even cooperative multipolarity.
The document authorizes military force against cartels, targeted deployments along the border, and the rollback of Chinese and European strategic positions in Latin America. It frames the hemisphere as a zone of exclusive American responsibility—echoing Theodore Roosevelt more than James Monroe.
Migration as National Security
The strategy’s most dramatic reframing is its treatment of migration:
“The era of mass migration must end. Border security is the primary element of national security.”
Migration is grouped with terrorism, drugs, and trafficking—an elevation far beyond Monroe’s language and even beyond the Cold War’s focus on ideology.
Confrontation with Europe
The strategy openly criticizes European allies and predicts their demographic and cultural decline. It encourages “patriotic parties” in European democracies—an unusual form of ideological interference. This is a noteworthy reversal: Monroe promised not to interfere in Europe; the Trump strategy seeks to influence European domestic politics.
China and Economic Competition
China is reframed strategically, not as an existential threat but as a commercial rival. The corollary treats Chinese presence in Latin America—ports, lithium mines, telecom infrastructure—as a red line, yet it simultaneously expresses interest in mutually beneficial trade.
This duality reflects the document’s broader tension: a desire for economic engagement with Beijing while preventing its influence anywhere in the Western Hemisphere.
Middle East and Africa
The document presents the Middle East as an emerging zone of stability and partnership, declaring an end to the era in which Middle Eastern crises consumed American attention. In Africa, it proposes replacing aid with trade, emphasizing partnerships with states that welcome American commerce.
Peace Claims
The strategy claims that President Trump achieved peace in eight international conflicts. Whether these claims will withstand scrutiny remains uncertain, but their inclusion underscores the administration’s desire to frame its approach as peace-producing rather than confrontational.
Part III: Who Must Approve or Honor These Strategies?
National Security Strategies Are Presidential Declarations
The National Security Strategy (NSS) is required by the Goldwater-Nichols Act (see appendix), but it does not have the force of law. It binds no future Congress, no court, and no ally. It is authoritative inside the executive branch, but constrained by law at every turn.
Domestic Legal Limits
Congress controls war powers and appropriations.
Treaties like NATO remain binding until formally abrogated.
Courts may block executive actions, as seen in litigation over birthright citizenship.
Posse Comitatus constrains domestic military enforcement, unless Congress authorizes exceptions.
International Law and Foreign Responses
The Monroe Doctrine has always been unilateral. No nation is obligated to honor it. Latin American states—many of which now rely heavily on Chinese investment—are unlikely to welcome a 2025 revival framed as exclusionary. Europe may resist American attempts to influence its domestic politics. China can ignore American demands that it divest from hemispheric assets.
The Trump Corollary’s success therefore depends not on diplomatic persuasion but on American capacity—economic, military, and political—to enforce it.
The Feasibility Problem
A critical analytical question emerges: Does the United States currently possess the power, resources, and political consensus needed to enforce hemispheric dominance?
Several issues complicate enforcement:
A Navy struggling to meet global commitments
A defense industrial base strained by years of underinvestment
Domestic political polarization
High federal debt limiting sustained military expansion
Latin American governments with alternative economic partners (especially China)
The Trump Corollary’s ambitions may exceed available means—an imbalance that has historically undermined doctrines that promise more than the nation can deliver.
Part IV: How Would the Trump Corollary Have Changed America Since 1960?
Counterfactual analysis reveals both the appeal and the risks of the Trump Corollary.
The Cuban Missile Crisis (1962)
Kennedy’s blockade aligned with Monroe’s principles, but his restraint—rejecting military strikes—contrasts sharply with the Trump strategy’s willingness to use lethal force to preempt threats. A Trump-style approach might have produced the airstrikes the Joint Chiefs recommended, risking nuclear escalation.
Vietnam and Cold War Interventions
A Trump Corollary framework would likely have avoided Vietnam entirely, given its skepticism of “forever wars” outside the hemisphere. Yet it might have intensified interventions in the Caribbean and Central America, where U.S. dominance was explicitly asserted.
Immigration Policy
The 1965 Immigration and Nationality Act would likely never have passed under a Trump Corollary worldview. The demographic, cultural, and economic consequences of that alternative history would be profound—yielding a more homogeneous but older and economically constrained nation.
Relations with Europe and NATO
A doctrine that treats alliances as transactional could have undermined Cold War deterrence. Europe might have developed independent nuclear forces sooner. The European Union itself might have taken a different form—or not emerged at all.
Economic Globalization
Rejecting trade liberalization would have preserved some manufacturing but at substantial economic cost. America might have had higher wages in industrial sectors but a smaller economy, reducing its ability to project power globally.
Middle Eastern Engagement
Much of America’s costly Middle Eastern involvement might have been avoided, though 9/11 demonstrated that even non-intervention cannot insulate the nation from transnational threats.
Latin America and China
A Trump Corollary applied since 1960 would have required far more sustained investment in Latin America to preempt Chinese influence—investment the United States historically has not provided.
Conclusion: Continuity, Rupture, and the Question of Endurance
The Trump Corollary is both a revival and a reinvention. Like the Monroe Doctrine, it asserts hemispheric boundaries and warns foreign powers away. But unlike Monroe, it does not promise reciprocity, restraint, or non-interference. It replaces Monroe’s defensive posture with a claim to hemispheric dominance. It critiques allies Monroe refused to criticize. It directly inserts the United States into the domestic ideological struggles of Europe. And it elevates migration—unimaginable to Monroe—as the central security issue of the age.
Ultimately, the Trump Corollary’s durability will depend on factors Monroe did not face:
a multipolar world,
a globally intertwined economy,
hemispheric partners with diversified alliances, and
a deeply polarized American electorate.
Doctrines endure only when they align national interests, national capacity, and national consensus. Monroe had that combination—though only decades later. Whether the Trump Corollary possesses the same enduring quality is uncertain.
In the end, the question is not whether the Trump Corollary represents a bold vision. It does. The question is whether it is sustainably bold—and whether future administrations will embrace or repudiate it.
Two hundred years separate Monroe from Trump. Both spoke to their time. History will determine which one better matched America’s enduring interests—and which one attempted more than the nation could ultimately sustain.
Appendix
How Presidents from Reagan to Biden Treated the National Security Strategy Requirement
The modern National Security Strategy traces to the Goldwater-Nichols Act of 1986, which attempted to bring coherence to U.S. defense planning after Vietnam, Watergate, and the early Reagan-era military buildup. The act required the president to submit a comprehensive NSS to Congress on a regular basis. Yet from its inception, the requirement carried no enforcement mechanism, no deadline penalty, and no legal force beyond the obligation to publish. Every president since has treated the NSS accordingly—not as binding doctrine, but as a statement of priorities that may or may not shape actual policy.
Ronald Reagan (1981–1989): The First to Issue an NSS—But on His Own Terms
Reagan’s administration issued the first formal National Security Strategy in 1987. It articulated themes Reagan had been voicing for years—peace through strength, rollback of Soviet influence, and the Strategic Defense Initiative. But even Reagan’s NSS served more as a codification of existing policy than a guiding document. The administration routinely adjusted its approach to the Soviet Union as diplomacy evolved, demonstrating the NSS’s role as an informational paper rather than a directive roadmap. Reagan never treated it as binding and did not revise policies to conform to it.
George H. W. Bush (1989–1993): A Strategy Overtaken by Events
President Bush issued strategies in 1990 and 1991, but the collapse of the Soviet Union forced constant revision in practice. The Gulf War likewise illustrated how new crises often moved faster than strategic paperwork. Bush embraced the NSS as a communication tool but never suggested it constrained presidential freedom of action. Its themes—collective security, stability in Europe, and regional deterrence—reflected Bush’s worldview, but the administration’s actions consistently adapted to rapidly shifting realities.
Bill Clinton (1993–2001): From Engagement to Enlargement
Clinton’s strategies in 1994, 1995, 1996, and 1997 emphasized “engagement and enlargement,” humanitarian intervention, and the spread of democratic institutions. Yet major Clinton-era actions—including Bosnia, Kosovo, and the 1994 Haiti intervention—were justified by presidential decision-making rather than strict adherence to the NSS texts. The administration used the NSS to signal broad values and priorities but not as a constraint on improvisational foreign policy.
George W. Bush (2001–2009): A Dramatic Strategy That Didn’t Bind Policy
Bush’s 2002 NSS was one of the most consequential ever written, famously introducing:
preemptive action,
the war on terrorism, and
the goal of advancing freedom as a strategic priority.
But even this powerful document was illustrative, not binding. It did not legally authorize military operations in Afghanistan or Iraq, nor did it override congressional war powers or treaty obligations. Bush’s later 2006 NSS softened some earlier claims, demonstrating again that an NSS reflects presidential messaging rather than statutory guidance.
Barack Obama (2009–2017): Strategies That Acknowledged Their Own Limits
Obama’s 2010 and 2015 strategies emphasized diplomacy, multilateralism, and the avoidance of open-ended conflicts. Yet Obama’s major decisions—including the 2011 intervention in Libya, the decision not to enforce the “red line” in Syria, the pivot to Asia, and the Iran nuclear deal—often diverged from or expanded beyond the documents’ frameworks. Obama openly recognized that strategies evolve with circumstances, implicitly affirming that the NSS carries no binding force.
Donald Trump (2017–2021): A Strategy Unaligned With Presidential Action
Trump’s 2017 NSS described China and Russia as great-power competitors, yet Trump often pursued warmer personal relations with both Xi Jinping and Vladimir Putin than the strategy implied. The administration’s withdrawal from the Iran deal, negotiations with North Korea, and approach to NATO frequently departed from traditional strategic guidance. Trump’s first-term NSS was more muscular than his actual foreign policy in some areas and more cautious in others—another demonstration that the NSS is aspirational, not mandatory.
Joe Biden (2021–present): Treating the NSS as Optional in Practice
Biden issued the Interim National Security Strategic Guidance in 2021, a document not envisioned in the statute but accepted without objection because the NSS requirement has no enforcement mechanism. His formal 2022 NSS focused on strategic competition with China and support for democratic allies. Yet Biden’s major decisions—especially the Afghanistan withdrawal and the scale of support for Ukraine—illustrated the familiar pattern: presidents act according to events and politics, not according to NSS language. Biden’s NSS explicitly stated that it “provides direction to departments and agencies,” confirming its internal, advisory nature.
The Long Arc: A Consistent Pattern
Across nearly four decades, the pattern is unmistakable:
Presidents publish the NSS because the law requires it.
They adjust its content to reflect their broader worldview.
They frequently act outside it when circumstances require.
No president has treated it as binding—or asked that it be treated as binding.
Congress, the courts, and international partners do not view it as enforceable.
The NSS is an instrument of communication, coordination, and signaling, not a constraint on presidential power or a substitute for congressional authority.
It is in this historical lineage that the Trump Corollary appears: bold in rhetoric, sweeping in intent, but ultimately limited not by its ambition but by the same structural constraints that shaped every NSS since Reagan.