
TMTB Morning Wrap
American Stocks • Large Cap Stocks

TMT Breakout • February 23, 2026

Key Takeaways

  • OpenAI targets roughly $600B in compute spend through 2030
  • The earlier $1.4T figure reflects broader CAPEX commitments, not OPEX
  • The Stargate JV has stalled, with no data center progress
  • Wells Fargo lifts its Alphabet PT to $387, citing a compute lead
  • AI compute capacity becomes the decisive competitive factor

Summary

OpenAI disclosed it now targets roughly $600 billion in compute spend through 2030, a revision from the earlier $1.4 trillion infrastructure commitment that blended CAPEX and OPEX. The company’s push for additional AI‑compute power is hampered by the stalled Stargate joint venture with Oracle and SoftBank, which remains unfunded and leaderless. Meanwhile, Wells Fargo upgraded Alphabet to overweight and raised its price target to $387, citing Google’s aggressive AI‑compute capacity expansion. Together, these moves highlight a tightening race for AI infrastructure and capital allocation.

Pulse Analysis

OpenAI’s recent filing reveals a $600 billion compute budget for the 2026‑2030 period, separating operational spend from the broader $1.4 trillion capital‑expenditure pledge that includes partner investments from Azure, AWS, Oracle and SoftBank. By isolating OPEX, the firm signals a more disciplined funding approach, yet the sheer magnitude underscores the escalating cost curve of training next‑generation models. Investors are now watching how OpenAI balances cash flow with the need for ever‑larger GPU clusters, a dynamic that could reshape venture financing in the generative‑AI space.

Compounding the funding challenge, the three‑way Stargate venture—intended to deliver dedicated OpenAI data centers—has stalled. Sources describe a leadership vacuum and unresolved governance between OpenAI, Oracle and SoftBank, leaving the partnership without a clear roadmap. This setback forces OpenAI to rely more heavily on existing public‑cloud contracts, potentially inflating per‑compute costs and limiting custom hardware optimization. The episode also illustrates a broader industry lesson: even well‑capitalized AI firms struggle to secure bespoke infrastructure without aligned incentives among cloud providers and investors.

Across the AI landscape, Alphabet’s recent upgrade by Wells Fargo reflects a contrasting narrative. Google aims to boost its AI‑compute capacity to 35 GW by 2028, more than doubling its 2025 footprint. This aggressive scaling, combined with deep user data and a global distribution network, positions Alphabet as a de facto AI platform for both consumer and enterprise workloads. As hyperscalers vie for compute supremacy, capacity becomes the decisive moat, influencing everything from model training speed to pricing power in the burgeoning AI services market.
