Benjamin in Brussels at NATO and EU Economics

Today I am in Brussels speaking at NATO and the European Union, presenting MindGuard’s Ukrainian project and advising on how technology & bio-analytics can strengthen mental resilience.

If you are near Brussels today, reach out and I’m happy to meet!

To me it is evident that the future of AI is not just political, technological, or economic; it is deeply human. The same question I am discussing in Brussels today also runs through my most recent research on the future of societal wealth, banking, and financial markets.

Big thanks to Globaleye Capital, Salus Alpha, and Patrimonium Asset Management for triple-booking me to present these findings across Germany, Zurich, and Lausanne, and to UBS's CTO Mike Dargan for inviting me to extend these conversations.

Below are the Top 5 Insights from the publication and what they mean for financial institutions integrating AI agents safely at both the technical and human level.

1. Job fears rise when technical literacy falls
→ 81% of finance professionals expect AI to eliminate jobs, but only 11% believe their own role is at risk. The less someone understands AI, the more invincible they assume they are.
Example: Teams with low AI literacy delay adoption, underestimate automation risk, and then overreact when disruption hits. Leadership must break this pattern through structured education and transparent risk frameworks.

2. Value chains are digital, leadership structures are not
→ Fewer than 5% of financial institutions have a CIO or CTO on the executive board, yet 95% of their value chain is digital.
Example: A bank may invest heavily in technology but still make decisions in committees without genuine technical authority, slowing innovation and increasing operational vulnerability.

3. Exclusive data becomes the new insider advantage
→ Public AI models are becoming interchangeable. What matters is private, authenticated, proprietary data: the last defensible advantage.
Example: Banks shift from traditional money vaults to authority institutions: digital notaries that cryptographically seal transactions, issue identity keys for AI agents, and certify provenance in a synthetic-content world.
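The notary role above can be sketched with standard-library primitives. This is a simplified, hypothetical illustration: a production notary would use asymmetric signatures (e.g., Ed25519) issued under a PKI, not a shared secret; the names `seal`, `verify_seal`, and `NOTARY_KEY` are invented for the example, and HMAC stands in for a real signature scheme only because it ships with Python.

```python
import hashlib
import hmac
import json

# Hypothetical key held by the notary institution (a real system would
# use an asymmetric key pair, with only the public key distributed).
NOTARY_KEY = b"demo-notary-secret"

def seal(record: dict) -> dict:
    """Notarize a record: hash its canonical form and attach a keyed seal."""
    canonical = json.dumps(record, sort_keys=True).encode()
    digest = hashlib.sha256(canonical).hexdigest()
    tag = hmac.new(NOTARY_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return {"record": record, "digest": digest, "seal": tag}

def verify_seal(sealed: dict) -> bool:
    """Check the record still matches its digest and the notary's seal."""
    canonical = json.dumps(sealed["record"], sort_keys=True).encode()
    digest = hashlib.sha256(canonical).hexdigest()
    expected = hmac.new(NOTARY_KEY, digest.encode(), hashlib.sha256).hexdigest()
    return digest == sealed["digest"] and hmac.compare_digest(expected, sealed["seal"])

tx = {"agent_id": "agent-007", "action": "transfer", "amount": 1000}
sealed = seal(tx)
print(verify_seal(sealed))  # True for the untampered record
```

Any change to the record, however small, changes its digest and invalidates the seal, which is exactly the provenance guarantee a synthetic-content world needs.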

4. AI agents require verifiable autonomy, not blind trust
→ If AI agents are ever allowed to act autonomously in economic systems, they must be verifiable without being transparent. Zero-Knowledge Proofs (ZKPs) offer exactly that: the ability to prove correctness without revealing models or sensitive data.
Example: An AI agent executing a trade can prove it followed risk rules, cryptographically, without exposing client data or internal model weights.
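The "prove without revealing" idea can be illustrated with a toy Schnorr proof of knowledge, the textbook building block behind many ZKP systems. This is a sketch under deliberately tiny parameters: a real deployment would use a vetted proof system and large, standardized groups, and the framing of the secret as a key certifying a risk policy is my assumption, not something from the publication.

```python
import hashlib
import secrets

# Toy Schnorr proof (Fiat-Shamir variant): the prover demonstrates knowledge
# of a secret x behind the public value y = G^x mod P, without revealing x.
# Parameters are deliberately tiny for illustration; never use in production.
P = 23  # small safe prime: P = 2*Q + 1
Q = 11  # prime order of the subgroup generated by G
G = 4   # generator of the order-Q subgroup mod P

def prove(x: int) -> tuple[int, int]:
    """Prover: create a non-interactive proof of knowledge of x."""
    r = secrets.randbelow(Q)
    t = pow(G, r, P)  # commitment
    c = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % Q
    s = (r + c * x) % Q  # response; reveals nothing about x on its own
    return t, s

def verify(y: int, t: int, s: int) -> bool:
    """Verifier: check the proof against public y, learning nothing about x."""
    c = int.from_bytes(hashlib.sha256(str(t).encode()).digest(), "big") % Q
    return pow(G, s, P) == (t * pow(y, c, P)) % P

secret_x = secrets.randbelow(Q)   # e.g., a key certifying a risk policy (assumed framing)
public_y = pow(G, secret_x, P)
t, s = prove(secret_x)
print(verify(public_y, t, s))     # True: the claim checks out, x stays hidden
```

The verifier only ever sees `(t, s)` and the public value, which is the shape of guarantee an autonomous trading agent would need: auditable compliance with the rules, zero exposure of client data or model internals.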

5. The edge goes to teams that blend human judgment with machine leverage
→ The future belongs to institutions that combine private data assets, strategic governance, and AI-native execution.
Example: Hybrid teams of domain leaders, risk experts, and AI agents working in orchestrated cycles outperform those that treat AI as automation rather than augmentation.

As I speak today at NATO and the European Union, the message becomes even clearer:
Across humanitarian aid, defense, and autonomous finance, technology must be designed to strengthen people, not replace their judgment, dignity, or resilience.
