Policymakers and businesses must temper expectations of AI as a crystal‑ball tool, integrating human judgment to avoid strategic blind spots. The trajectory of trust, regulation, and technology will shape competitive advantage across sectors.
Current generative models are fundamentally probabilistic, designed to predict the next token rather than future events. This architecture limits their ability to forecast complex geopolitical shifts, where human decisions, sudden crises, and non‑linear dynamics dominate. While AI can surface hidden patterns in large datasets, relying on it as a crystal ball for policy risks overlooking critical context and creative reasoning that only humans provide. A hybrid approach—combining AI‑driven trend detection with expert judgment—offers a more resilient decision‑making framework for governments and corporations alike.
Looking forward, AI’s growth trajectory is hitting practical ceilings. Exponential gains in model size and capability demand ever‑greater compute power and energy, raising sustainability concerns and potentially curbing unchecked scaling. Simultaneously, public trust is eroding; surveys show a majority of Americans doubt AI outputs are fair, a sentiment that could trigger regulatory backlash and reduced capital flow. Nations are responding with sovereign AI strategies, seeking domestic control over data, security, and economic benefits while still relying on global talent and infrastructure. Emerging research on "world models"—systems that predict actions rather than words—promises to extend AI beyond chat interfaces into robotics and autonomous decision‑making, marking a pivotal shift in its societal impact.
For industry leaders, these dynamics translate into strategic imperatives. Investment should prioritize modular, purpose‑built models that integrate securely with proprietary databases, reducing reliance on monolithic, opaque services. Organizations must embed AI ethics and transparency into development pipelines to rebuild confidence and meet emerging regulations. Most critically, cultivating interdisciplinary teams that blend data scientists with domain experts will enable the kind of hybrid forecasting highlighted by the Atlantic Council, ensuring AI augments rather than replaces human insight as the economy and geopolitics evolve.
Three years since ChatGPT launched, a combination of hype and fear has made it hard to think clearly about our new age of artificial intelligence (AI). But AI has the potential to change the world—from energy to geopolitics to the global economy to the very production and application of human knowledge. If ever we needed clear‑eyed analysis, it’s now.
At the Atlantic Council, the experts in our Technology Programs spend much of their time thinking about how AI will shape our future—and they have the technical literacy essential to the task. So, as part of our annual Global Foresight report on the decade to come, we asked them our most pressing questions:
How will AI evolve over the next ten years and beyond?
How can we use AI to forecast global affairs?
And—let’s be real—will this thing replace us?
Then our experts put AI chatbots through their paces, presenting them with questions from our Global Foresight survey of (human) geostrategists and foresight practitioners about what the world will look like by 2036. Below are the edited and condensed highlights from our conversations.
“I would not trust today’s AI systems to reliably forecast global affairs. I think that comes down to the fact that, so often, global events don’t follow predictable patterns. That’s because so much of global geopolitics is driven by human decisions.”
— Tess de Blanc‑Knowles, senior director of Atlantic Council Technology Programs
“When you’re asking AI to predict the future, you’re asking it a big, unbounded question. What large language models (LLMs), which is what the current generation of generative artificial intelligence is built on, are good at doing is next‑word or next‑token prediction.”
— Trey Herr, senior director of the Atlantic Council’s Cyber Statecraft Initiative
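The next‑token mechanics Herr describes can be illustrated with a toy example: a "model" that has learned, from some imaginary corpus, the probability of each word following another, and generates text purely by sampling from those distributions. The bigram table below is invented for illustration; real LLMs learn billions of parameters over subword tokens, but the core operation is the same sampling step.

```python
import random

# Toy bigram "language model": for each word, the probability of the
# word that tends to follow it. These values are invented for
# illustration only; a real LLM learns them from training data.
bigram_probs = {
    "the":    {"market": 0.5, "model": 0.3, "future": 0.2},
    "market": {"will": 0.7, "is": 0.3},
    "will":   {"grow": 0.6, "crash": 0.4},
}

def next_token(word: str, rng: random.Random) -> str:
    """Sample the next word from the learned distribution."""
    candidates = bigram_probs[word]
    words = list(candidates)
    weights = [candidates[w] for w in words]
    return rng.choices(words, weights=weights, k=1)[0]

# Generate text one token at a time until we reach a word the
# "model" has no continuation for.
rng = random.Random(0)
text = ["the"]
while text[-1] in bigram_probs:
    text.append(next_token(text[-1], rng))
print(" ".join(text))
```

Note that nothing in this loop represents the world the words describe: the system only knows which token tends to follow which, which is why the experts above distinguish pattern continuation from genuine forecasting.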
“Right now, I think any policymaker would be very poorly served by, say, pulling up an LLM and asking, ‘What’s going to happen next?’ That’s not really the strength of these modern systems.”
— Emerson Brooking, director of strategy and resident senior fellow at the Atlantic Council’s Digital Forensic Research Lab
“It is not a crystal ball. In technical terms, AI is probabilistic. It is not predictive or deterministic. A fundamental barrier for artificial intelligence is that it cannot experience the real world… We can, however, envision a world where AI models and human forecasters work together to make better predictions.”
— Trisha Ray, associate director and resident fellow at the Atlantic Council’s GeoTech Center
“If you asked an AI system to predict the outcome of the Super Bowl, you could equip the model with data from past seasons, the teams, the performance of the players, and the trajectory of those teams over the course of the season… But the system is not going to be able to predict that rogue tackle that creates a season‑ending injury for a star player, or the interpersonal dynamics among the team that can either supercharge their pathway to the championship or totally derail it.”
— Tess de Blanc‑Knowles
“AI’s limitation is that it cannot produce new information. It can’t expand the universe of knowledge that we’re currently training on. What it can do is identify novel insights, identify trends that may have taken humans a lot of time to manually produce or see.”
— Graham Brookie, Atlantic Council vice president for technology programs and strategy
“Today’s AI systems are well‑suited for predictive tasks where there are stable patterns and there’s a good amount of historical data to train the systems on. So this bears out in near‑term weather prediction, traffic patterns, predicting maintenance needs for an airplane or some other complex manufacturing system.”
— Tess de Blanc‑Knowles
“The growth of AI capability over the past few years has essentially been predictable: it continues to increase exponentially as we devote exponentially more processing power and energy to its needs. But that can’t go on indefinitely. I think soon there will be something that feels like a ceiling.”
— Emerson Brooking
“The bubble that is this market is going to pop, and we’re going to see some of these firms fail. You’re going to see others rise up and succeed… The side effect of that is likely that there is a lot of infrastructure, a lot of computing resources, a lot of talent that’s suddenly available and looking for work and looking for ways to be useful. And that kind of thing can be a really powerful driver of innovation.”
— Trey Herr
“Another very significant risk to the progress of AI is trust. In the United States, recent polls have shown that 60 percent of American adults don’t trust the output of an AI system to be fair and unbiased… If that baseline distrust is followed by a series of accidents or destructive news around AI, consumers will lose confidence, businesses will assume higher risk, and investment will cool.”
— Tess de Blanc‑Knowles
“You could imagine in ten years an absolutely fantastical, extremely powerful tool assisting you in every aspect of daily life and essentially knowing what you want at all times. But my greater concern, if that is the future, is who will have access to this tool? Because for AI tools to be this capable, they will be immensely energy‑intensive and extremely expensive. I wonder how much longer the current focus on making AI accessible to as many people as possible will last.”
— Emerson Brooking
“We’re seeing a lot of attention today on building what are called ‘world models.’ Instead of predicting the next word, these models are predicting the next action in the world. If we move in that direction, we’ll see the true impact of AI across society by breaking AI out of the computer interface into robotics that can take on more tasks.”
— Tess de Blanc‑Knowles
“In the future we may see a larger application of small language models built for specific purposes and hooked up to relevant databases, so that when you log in you’re using a geopolitical chatbot rather than a general‑purpose tool. We’re a ways off from that, though.”
— Trey Herr
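The “small model hooked up to relevant databases” pattern Herr describes is commonly known as retrieval‑augmented generation. Below is a minimal sketch under invented assumptions: a three‑document store, naive keyword‑overlap retrieval, and an `answer` function that builds the prompt a purpose‑built model would receive. The document contents, scoring method, and function names are all placeholders; a real system would use embedding‑based search and an actual model API.

```python
# Hypothetical document store for a "geopolitical chatbot".
# Contents are invented placeholders, not real data.
documents = {
    "sanctions": "Summary of current sanctions regimes and recent changes",
    "elections": "Upcoming national elections and key dates to watch",
    "trade":     "Recent trade agreement negotiations and open disputes",
}

def retrieve(query: str, k: int = 1) -> list[str]:
    """Rank stored documents by naive keyword overlap with the query."""
    q_words = set(query.lower().split())
    scored = sorted(
        documents.items(),
        key=lambda kv: len(q_words & set(kv[1].lower().split())),
        reverse=True,
    )
    return [text for _, text in scored[:k]]

def answer(query: str) -> str:
    """Prepend retrieved context to the prompt for a small model."""
    context = "\n".join(retrieve(query))
    prompt = f"Context:\n{context}\n\nQuestion: {query}"
    # A real system would send this prompt to the model here;
    # we return it so the grounding step is visible.
    return prompt

print(answer("What sanctions are in effect?"))
```

The design point is that the database, not the model's weights, supplies the current facts, which is what makes a small, purpose‑built model viable for a narrow domain.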
“One of the trends I would look out for in 2026 is countries going all‑in on sovereign AI. The principle driving this trend is simple: governments want to control AI before it controls us. Sovereign AI is defined by four characteristics: (1) adherence to national laws, (2) national security, (3) economic competitiveness, and (4) value alignment. But it’s not possible for a country to build the entire AI stack indigenously.”
— Trisha Ray
“Two trends to watch: (1) the continuing sophistication of tools, especially the context window. When ChatGPT launched, the context window was about 4,000 tokens; a year later it was 100,000; today some consumer‑grade models support up to 2 million. Exponential growth in that window could push us toward artificial general intelligence. (2) Financing, political conditions, and energy costs. If AI companies hit a valuation ceiling, shocks could reverberate through the whole system and affect development.”
— Emerson Brooking
“AI is changing the way we interact with things we touch every day. In ten years I think we’ll see more AI in commercial, security, and warfare landscapes. It’s highly likely we’ll achieve artificial general intelligence, though that doesn’t necessarily mean killer robots will govern us.”
— Graham Brookie
“What we’re seeing is one of the most significant changes in digital technology in the last fifty years, probably since the creation of the personal computer. Before the PC you had to go to an institution for computing time. Personal computers put that power in individuals’ hands. AI is doing the same thing—putting the ability to do complex research and knowledge production into every person’s hands.”
— Trey Herr
“Artificial intelligence, the way we use it now, is not transformational yet. I would say AI is more a continuation of the digital revolution. It’s exciting, but not society‑shaking yet. The industrial revolution changed how we live and work and reshaped our politics; AI is not at that stage yet.”
— Trisha Ray
“Humans can think. Generative AI models can’t think. That’s a crucial distinction. It’s easy to anthropomorphize something that chats with you.”
— Trey Herr
“Humans understand context, cause and effect, and have creativity to think through scenarios not present in prior events. AI systems cannot creatively think of a new event.”
— Tess de Blanc‑Knowles
“As AI tools become normalized, we may rely on them for higher‑order thinking, outsourcing critical reasoning to machines that are limited by the data they were trained on. That could trap us in a recursive loop where the future horizon narrows to what the machine tells us is possible.”
— Emerson Brooking