Tried and True: The Terrorism We Expect, and the High-Casualty Attack We Probably Don’t
Executive Summary
Most of today’s terrorist violence relies on familiar, low-skill methods, and the online messaging around it keeps encouraging copycat action. The bigger danger is different: a coordinated, multi-part attack that exploits routine systems, creates confusion, and slows response long enough to drive casualties into the thousands. The warning signs are less about exotic weapons and more about attackers learning how our public life actually works and where it breaks under pressure.
Intelligence Analysis
If you want to understand the next major mass-casualty risk, stop looking for “new” weapons and start looking for old human habits. Security planning, like everything else, tends to chase the last headline. After a vehicle attack, we think about bollards. After a stabbing, we think about screening. After a bombing attempt, we think about bags. That is necessary, but it is not sufficient. The attacks that shock a country are the ones that don’t fit neatly into the category we prepared for.
The modern terrorism environment is split in two.
One side is visible and noisy: propaganda, threats, and attacks that are easy to understand and easy to describe. A recent example is ISIS-linked online content that appears to show a backpack device framed as “ready.” Whether the images reflect a real, usable device is not the point. The point is the message: violence is simple, portable, and within reach. That kind of posting is designed to lower the psychological barrier for someone on the edge. It turns terrorism into a do-it-yourself fantasy, and it keeps the public focused on the obvious: the backpack, the knife, the vehicle.
The other side is quieter and more dangerous: the attack that looks like something else until it is too late. This is the kind of incident that becomes a national trauma, not because it used a magic weapon, but because it exploited a blind spot.
A good way to think about “another 9/11” is to focus not on the tool that was used but on the mental trap it revealed. Before 9/11, aviation risk was widely understood, but it was understood in the wrong shape. People planned for hijackings as hostage events, not as rapid mass murder using the aircraft itself. The failure was a failure of imagination, but also a failure of systems: rules, response times, coordination, and assumptions about how far an attacker would go.
That same dynamic still exists today. The next major event is likely to come from one of these gaps:
First, the handoff problem. Big systems do not fail because nobody cared. They fail because responsibility is divided. Public health sees one kind of problem. Police see another. Venue security sees another. Transportation authorities see another. Each responds competently inside its lane, and the attacker lives in the spaces between lanes. The first hour matters most, and the first hour is where confusion thrives. A truly dangerous attack is designed to make the first hour unclear.
Second, trust as a vulnerability. Many high-impact incidents do not begin with a dramatic breach. They begin with legitimate access: credentials, uniforms, supply deliveries, work orders, rentals, contracts, and routine services. The more complex the society, the more it runs on trust and volume. When attackers learn to mimic normal operations, they stop looking like “outsiders” and start looking like another Tuesday.
Third, overload as a strategy. A high-casualty attack is rarely just about the first strike. It is about what the first strike does to response: jammed roads, jammed phone lines, jammed emergency rooms, confused evacuations, conflicting information, and delayed decision-making. The most dangerous planners are not just thinking about how to hurt people. They are thinking about how to slow help.
This is where the Las Vegas investigation belongs in a forward-looking discussion, even if it ultimately turns out to involve criminal negligence or illicit activity rather than terrorism. Authorities reportedly seized lab equipment and large quantities of unknown liquids and substances, collected more than 1,000 samples for federal testing, and linked the property to the earlier Reedley lab case. The key fact is uncertainty. Officials did not publicly claim a mass-casualty plot; they said testing is needed to determine what the substances are. But strategically, the case illustrates a reality that serious attackers have always understood: high-consequence hazards can exist in plain sight, inside ordinary neighborhoods, discovered by routine enforcement rather than dramatic intelligence tips.
That should change how readers think about prevention. The public tends to imagine that a “big attack” will announce itself loudly in advance. In practice, many warning signs are mundane: unusual storage, unusual purchases, unusual access requests, unusual attempts to rent space or equipment, unusual interest in back-of-house areas, unusual behavior around schedules and security routines. None of these proves terrorism. But they are exactly the kinds of signals that get missed when people only look for a masked man with a weapon.
The educational point is simple: mass casualties come from a triangle.
Crowds. Confusion. Time.
Crowds create the potential. Confusion slows recognition. Time allows casualties to scale.
This is why soft targets recur throughout the history of mass-casualty attacks. It is not because they are symbolic. It is because they are easy to enter and hard to protect without turning public life into a fortress. The attacker does not need to defeat the state. The attacker needs to defeat the first few minutes.
So what does “anticipate and mitigate” look like when the threat is the thing we haven’t named yet?
It starts with breaking the habit of planning for one attack category at a time. Defenders should prepare for blended problems: a public safety incident that becomes a security incident, a technical failure that masks violence, a routine call that turns into a multi-site emergency. The most valuable preparedness work is not predicting the exact plot. It is building systems that move fast when the situation is unclear.
It also means treating early ambiguity as a serious condition, not an excuse to wait. Many catastrophic outcomes are enabled by hesitation: “We don’t know what this is yet.” A sophisticated adversary designs the incident so that uncertainty is guaranteed. The solution is not panic. The solution is practiced, pre-agreed decision pathways that can scale response while facts are still forming.
Finally, creativity in prevention is not about inventing scary scenarios. It is about asking better operational questions than attackers expect:
Where do crowds bottleneck, and how do we release pressure quickly?
Where does communication fail when cell networks overload?
What happens if the first reports are wrong?
Which access points are trusted by default?
Which “routine” services could be abused?
What does rapid medical movement look like when roads are blocked?
Those questions are not dramatic, but they are the difference between an incident that is contained and an incident that becomes historic.
The next mass-casualty event, if it comes, will not feel “creative” in hindsight. It will feel like a series of small, fixable gaps that an attacker understood better than we did. The work now is to make sure our systems assume that someone is studying those gaps, because they are.

