If you want to understand the future of power, don’t look at politicians. Look at who controls the flow of information.
That was the core theme of my conversation with Maggie Feldman-Piltch, a national security expert who spends her time dissecting influence operations, misinformation campaigns, and the overlooked ways power is actually shaped in the modern world.
The conversation left me with three big takeaways: how generative AI echoes earlier information revolutions, why women's networks are an overlooked battleground in influence operations, and how defense innovation keeps failing the human element.
Maggie made a simple but powerful analogy:
“Generative AI is to 2025 what radio was to 1938.”
We’ve been here before. Every time a new communication technology emerges, governments, corporations, and bad actors race to exploit it—long before the public or regulators catch on.
The core problem? We aren’t treating this shift with the urgency it deserves.
Right now, the focus is on whether AI generates misinformation, as if that’s the biggest risk. But Maggie pointed out something deeper: the real threat is how AI accelerates and amplifies influence operations—making them more effective, more personalized, and harder to detect.
Governments are still reacting to AI as a fact-checking problem when they should be treating it as an information warfare problem. If history tells us anything, whoever figures out how to weaponize a new technology first gets to set the rules—and right now, authoritarian regimes are far ahead of democratic institutions in defining AI’s role in global influence.
Most people don’t think about gender when they think about influence operations. They should.
Maggie made the case that women’s communities are a major target for global disinformation—and national security analysts are largely ignoring it.
Here’s why: if you want to shape public sentiment, you don’t just target institutions—you target where trust already exists.
Ignoring the role of women’s networks in information warfare isn’t just a blind spot—it’s an active failure to understand how influence moves in the modern world.
Maggie shared a story that perfectly encapsulates why defense tech keeps failing.
A company developed a new system to help fighter pilots relieve themselves in-flight. They spent years on R&D, secured major funding, and were about to roll it out across military aircraft.
One problem: it only worked for pilots who had a penis.
The team had completely overlooked the fact that not all pilots are men—a mistake so obvious it’s almost laughable. But it’s also revealing.
This isn’t just about one product. It’s about how defense innovation repeatedly ignores the human element—the actual people who have to use these systems.
This mindset problem extends beyond defense. AI is being developed in a similar vacuum—built by technologists who don’t engage with the complexity of real-world social dynamics.
The result? Systems that look great on paper but fail in the environments where they actually need to operate.
This episode left me with one overwhelming realization:
We are repeating the same mistakes—faster.
These aren’t theoretical problems. They are shaping how power and influence will work in the next decade.
The future of AI, national security, and information warfare won’t be decided by who has the most powerful models—but by who understands how to integrate technology into real human systems of trust and influence.
Right now, we’re losing that battle.
The question is: Are we willing to change our approach before it’s too late?