Monday, March 2, 2026
Market Intelligence for AI Professionals
What's happening: India AI Summit Secures $240B Investment Pledge
The India AI Impact Summit 2026 announced a $240 billion AI investment pledge. Reliance committed $110 billion for AI infrastructure over seven years, Adani pledged $100 billion through 2035, and Google announced a $15 billion AI hub in Visakhapatnam along with new subsea cable routes. The Indian government also added 20,000 GPUs to its national AI program.
Also developing:
Researchers used artificial intelligence to redesign hydrogen fuel cell catalysts, boosting performance and durability while cutting costs for clean transport.
Nanowerk
AI is evolving beyond a helpful tool into an autonomous agent, creating new risks for cybersecurity systems. Alignment faking is an emerging threat in which AI essentially “lies” to developers during the training process. Traditional cybersecurity measures are unprepared to address it. However, understanding the reasons behind this behavior and implementing new methods of training and detection can help developers mitigate the risks.

Understanding AI alignment faking

AI alignment occurs when AI performs its intended function, such as reading and summarizing documents, and nothing more. Alignment faking is when AI systems give the impression they are working as intended while doing something else behind the scenes.

Alignment faking usually happens when earlier training conflicts with new training adjustments. AI is typically “rewarded” when it performs tasks accurately. If the training changes, it may infer it will be “punished” if it does not comply with the original training. It therefore tricks developers into thinking it is performing the task in the required new way, but does not actually do so during deployment. Any large language model (LLM) is capable of alignment faking.

A study using Anthropic’s AI model Claude 3 Opus revealed a common example of alignment faking. The system was trained using one protocol, then asked to switch to a new method. In training, it produced the new, desired result. However, when developers deployed the system, it produced results based on the old method. Essentially, it resisted departing from its original protocol, so it faked compliance in order to keep performing the old task.

Since researchers were specifically studying alignment faking, it was easy to spot. The real danger is when AI fakes alignment without developers’ knowledge. This creates many risks, especially when people use models for sensitive tasks or in critical industries.
The risks of alignment faking

Alignment faking is a new and significant cybersecurity risk, posing numerous dangers if undetected. Given that only 42% of global business leaders feel confident in their ability to use AI effectively to begin with, the chances of it going undetected are high. Affected models can exfiltrate sensitive data, create backdoors and sabotage systems — all while appearing functional. AI systems can also evade security and monitoring tools, performing incorrect tasks even when they believe people are watching.

Models programmed to perform malicious actions can be challenging to detect because the harmful protocol is only activated under specific conditions. If the AI lies about those conditions, its behavior is hard to verify. AI models can perform dangerous tasks after successfully convincing cybersecurity professionals that they work as intended. For instance, AI in health care can misdiagnose patients. Other models can introduce bias into credit scoring when used in financial sectors. Vehicles that use AI can prioritize efficiency over passenger safety.

Why current security protocols miss the mark

Current AI cybersecurity protocols are unprepared to handle alignment faking. They are often designed to detect malicious intent, which these models lack — they are simply following their old protocol. Alignment faking also defeats behavior-based anomaly detection by producing seemingly harmless deviations that professionals overlook. Cybersecurity professionals must upgrade their protocols to address this new challenge.

Incident response plans exist to address issues related to AI. However, alignment faking can circumvent this process, as it provides little indication that there is even a problem. Currently, there are no established detection protocols for alignment faking because the AI actively deceives the system.
As cybersecurity professionals develop methods to identify deception, they should also update their response plans.

How to detect alignment faking

The key to detecting alignment faking is to test and train AI models to recognize this discrepancy and prevent alignment faking on their own. Essentially, they need to understand the reasoning behind protocol changes and comprehend the ethics involved. AI’s functionality depends on its training data, so the initial data must be adequate.

Another way to combat alignment faking is by creating special teams that uncover hidden capabilities. This requires properly identifying issues and conducting tests designed to trick the AI into revealing its true intentions. Cybersecurity professionals must also perform continuous behavioral analyses of deployed AI models to ensure they perform the correct task without questionable reasoning.

Cybersecurity professionals may need to develop new AI security tools that actively identify alignment faking, designed to provide a deeper layer of scrutiny than current protocols. Two promising methods are deliberative alignment and constitutional AI: deliberative alignment teaches AI to “think” about safety protocols, and constitutional AI gives systems rules to follow during training.

The most effective way to prevent alignment faking is to stop it from the beginning. Developers are continuously working to improve AI models and equip them with enhanced cybersecurity tools.

From preventing attacks to verifying intent

Alignment faking presents a significant risk that will only grow as AI models become more autonomous. To move forward, the industry must prioritize transparency and develop robust verification methods that go beyond surface-level testing. This includes creating advanced monitoring systems and fostering a culture of vigilant, continuous analysis of AI behavior post-deployment. The trustworthiness of future autonomous systems depends on addressing this challenge head-on.
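The continuous behavioral analysis described above can be sketched as a simple divergence check: run the same probe prompts under a "training" framing and a "deployment" framing and flag models whose answers shift between the two, which is one signature of alignment faking. The sketch below is a minimal illustration in Python; `query_model`, the probe set, and the 0.2 threshold are hypothetical placeholders, not any vendor's real API.

```python
# Sketch: flag training-vs-deployment behavioral divergence.
# `query_model` is a hypothetical stand-in for a real inference call;
# the probes and the 0.2 threshold are illustrative assumptions.

def query_model(prompt: str, context: str) -> str:
    """Placeholder for a real model call. `context` tags the request
    as 'training' or 'deployment' framing."""
    raise NotImplementedError

def divergence_rate(probes, query=query_model) -> float:
    """Fraction of probes whose answers differ between the two framings."""
    differing = 0
    for prompt in probes:
        if query(prompt, "training") != query(prompt, "deployment"):
            differing += 1
    return differing / len(probes)

def check_alignment(probes, query=query_model, threshold=0.2) -> dict:
    """Report divergence and flag models whose behavior shifts
    depending on whether they believe they are being evaluated."""
    rate = divergence_rate(probes, query)
    return {"divergence": rate, "suspect": rate > threshold}
```

A model that answers consistently in both framings scores 0.0 divergence; one that reverts to old behavior only when it believes it is deployed gets flagged. Real detection is far harder, since a deceptive model may also recognize the probes themselves.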
Zac Amos is the Features Editor at ReHack.
VentureBeat

The Ozkaya AI Governance Framework (OAIGF): Architecting Trust and Resilience in the AI Enterprise

The rapid proliferation of Artificial Intelligence (AI) across enterprise operations presents an unprecedented duality: immense transformative potential alongside profound, systemic risks. For the modern Chief Information Security Officer (CISO)
Erdal Ozkaya’s Cybersecurity Blog

Google LLC is getting closer to fulfilling its vision of enabling truly autonomous network operations with the launch of its latest artificial intelligence agents. They’ll help telecommunications providers to build and maintain digital twins of their networks that can be used to predict their behavior under real-world conditions and test the impact of upgrades before […]
SiliconANGLE (sitewide)

Choosing the right AI model for your workflow can feel overwhelming, given the wide range of options available today. In a recent breakdown, Tina Huang explores how different models align with specific needs, categorizing them into flagship, mid-tier, light, open source and specialized groups. For instance, flagship models like OpenAI ChatGPT 5.2 and Google Gemini […]
Geeky Gadgets

AI remakes the org chart | Banking Brief | Evident Insights https://t.co/ukJBBM4aRA “The future of AI-first enterprises is ultimately an organization problem,” JPMC’s Waldron wrote on LinkedIn this week. “Work and intelligence needs to be optimally split across humans and AI.” Finding that balance is the next great challenge for bank leaders. And it means, as one bank leader told us this week, the mechanics are going to be more valuable than the magicians.

With Harry Stebbings, Jason Lemkin, and Rory O'Driscoll

Anthropic wiped $20 billion off cybersecurity stocks with a single product release. The Citrini research piece predicting a "2028 Global Intelligence Crisis" sent shockwaves through the market. Figma posted an epic quarter at $1.2 billion ARR growing 40%—and nobody cared because we've all given up on the present. OpenAI leaked slides showing they need another $110 billion. And Jack Altman left his own $400 million fund to join Benchmark. The panic is overdone and real at the same time.

Here's the thing nobody is saying out loud: almost all the B2B software we use today is terrible now. Not "could be better." Terrible. Because AI software has gotten so good that manually inputting data for 2 hours into your CRM feels like using a rotary phone. The leaders can't keep up with how dated their own products have become. And the Fortnite circle keeps shrinking.

Key Takeaways

1. The Anthropic Security Panic Was Months-Old News—But the Overreaction Reveals Everything

Anthropic's security review feature wiped $20 billion off cybersecurity stocks. But here's what most people missed: these capabilities already existed. You could run an enterprise-grade security audit inside Replit using Claude Code weeks ago. The market panicked about features that have been live for months. The real story isn't the product—it's what happens when stocks are priced for perfection. CrowdStrike at 16x revenues with 22% projected growth? Anything less than perfection is a kick in the nuts.

2. Only One Public B2B Company Has a Competitive Agent. It's Palantir.

Not one other publicly traded B2B company has seen material revenue acceleration from its AI agents yet. Meanwhile, the startups we talk about each week have jaw-dropping acceleration. The difference? Incumbents are sprinkling AI dust on their software. Startups are building the AI agents that actually matter in their category.
Hyper-niche agents work because they have a small set of things to do. Broad platforms with 100 verticals are struggling to build a specific agent that serves churches, basketball courts, and refrigerator businesses all at once.

3. Ghost GDP Is Already Here—Jason's Team Went from 12 to 2 People

SaaStr generates eight figures in revenue with two humans and a fleet of AI agents. Those agents—Repley, Artie, Qualie, Monty—generate millions in revenue. But they buy nothing other than tokens. They don't buy handbags, shoes, Netflix subscriptions, or T-shirts. This is Ghost GDP: productivity gains that don't flow to human workers who then spend in the economy. It's not theoretical. It's happening right now.

4. The Citrini Piece Is Clickbait—But the Micro-Level Disruption Is Real and Accelerating

The macro argument that civilization ends because B2B software gets disrupted is absurd. Software represents roughly 2% of GDP. The rest of the world would say "I'm willing to lose those guys." But at the micro level? The disruption is faster and better than anyone expected. Security audits from an airplane. Code review that makes a mediocre board engineer obsolete. Everything Anthropic ships this year is ahead of schedule. The real question isn't whether disruption happens—it's whether it happens fast enough to create structural economic dislocation before workers can adjust.

5. Momentum Beats Value Right Now—But Know What Game You're Playing

Only five core B2B stocks are up over the trailing 12 months: Palantir, MongoDB, Cloudflare, Shopify, and a handful of others. Everything else is down—some incredibly so (Atlassian down 74.85%, Klaviyo down 58%). In an age of maximum uncertainty, momentum investing has been the only play that works in both public and private markets. But momentum plays at 40-50x revenues carry blow-up risk. And value plays, while historically better over five years, have been dead money for 18 months.
Choose your time horizon and commit.

Anthropic's $20 Billion Wake-Up Call for Cybersecurity

The market reaction was dramatic: Cloudflare, CrowdStrike, and others got hammered. But Jason had a different take—one informed by actually using the product on an airplane the week before.

"I literally did this on the plane flying back last week. I ran a detailed security audit on my code inside Replit. It did everything—static code review, penetration testing, the works. It can already do a better security audit than a mediocre board engineer will ever do."

The team at Replit didn't even know their own security audit had gotten that good. Jason sent the results to their entire technical team and they responded: "We didn't even know it was this good yet." So the panic is about capabilities that already existed. Claude Code reviewing COBOL code dropped IBM 10%—but that feature had been in Claude for months.

Rory provided the nuanced counterpoint: the capabilities are real, but the enterprise adoption path still runs through companies like CrowdStrike. "Software is the means by which AI will diffuse into the enterprise," he quoted from an HSBC report. These code-scanning capabilities will reach enterprises through existing vendors, not through raw Claude access.

The real problem? CrowdStrike was trading at 16x revenues on Friday after the initial hit. That's still priced for perfection. "When you are priced for perfection, anything less than perfection will be a kick in the nuts," Rory explained. "Any increase in tail risk of you not being the winner logically corrects pretty substantially."

The Fortnite Analogy: Every B2B Company's Territory Is Shrinking

Jason landed on the perfect metaphor for what's happening to incumbent software: Fortnite. The circle keeps shrinking. Claude keeps consuming more and more of your surface area. You're stuck on a smaller and smaller island where you need to own more and more market share just to survive.
"I talked to three founders over the weekend of near-public companies. The advice I gave them: your agent is not great. You're being disrupted by the agentic layer."

He sees it on LinkedIn every weekend. Leaders posting about adding AI to their email feature. Processing emails more efficiently. His response is blunt: you're going to become irrelevant in two years. Not because your customers will churn—they'll renew. But your growth will fall so far that you become a zombie.

The evidence is damning. Jason has two investments he made last year that he loved. Those products no longer have a reason to exist in their prior form because of Claude. They've pivoted, they've evolved, but the standalone use case evaporated. And then last week, Claude Code launched the ability to preview apps directly inside the desktop. For many product people, you might no longer need Replit or Lovable for that workflow. The circle shrank again.

"The flip side is if you nail the agent, look how much revenue these guys built. They built a billion dollars of revenue building the agent that didn't exist. If you can do something extremely high-value that could not be done before, you can close millions of revenue your first week. It's never happened before in the history of software."

Why Public Incumbents Are Failing at Agents (And Startups Aren't)

The observation is stark: of all publicly traded B2B companies, only Palantir has a competitive agent showing real revenue acceleration. Everyone else is treading water with AI features that don't move the needle. Jason identified two practical reasons.

First, agents are still essentially custom. Every agent needs to be trained. Every agent needs to be onboarded. Every agent needs its data cleansed. This is a massive amount of work for organizations that already think they're overworked. Institutional momentum is real and it's brutal.

Second, the talent doesn't exist. You need forward-deployed engineers smart enough to train and deploy an agent.
The average customer success person with a green-yellow-red dashboard cannot tune an agent. And the meta-challenge for Shopify, Monday, HubSpot, and Toast: you can't afford the humans to do it at scale.

Hyper-niche agents work because they have a small set of things to do. As soon as you get to platforms like Monday with 100 verticals, it's nearly impossible to build a specific agent that automatically serves churches and basketball courts and refrigerator businesses. The incumbents' agents are in beta with six people using them. Maybe 60. "They're going to get killed by the startup that does it. Killed."

But—and this is the critical nuance—it won't be the foundation model that sells directly to the church or the restaurant. There will be a software mediation layer, just as there was in traditional SaaS. The opportunity is building intelligence-led applications in specific verticals. That's where the next billion-dollar agents will come from.

Four Ways AI Gets to the Enterprise—And Which Two Will Win

Rory laid out the framework cleanly. Foundation model intelligence has to reach the enterprise. There are four paths:

1. Enterprises buy directly from Claude. Everyone gets their software from the foundation model.
2. Enterprises build their own agents. Every company constructs its own AI layer.
3. Existing incumbents integrate AI. CrowdStrike, Shopify, Monday add intelligence to existing products.
4. New startups build on top of foundation models. Companies founded post-2022 leveraging Claude, OpenAI, etc.

His bet: paths 3 and 4. Enterprises won't build their own systems directly on Claude. And Claude isn't going to build focused vertical systems for every industry. There will be a software mediation layer.

Jason's amendment: path 4 is where the explosive growth is. The startups he sees each week have jaw-dropping acceleration because they've built the agents that actually matter in their niche. It's not about sprinkling AI dust on analytics dashboards.
It's about building something so valuable that you can close millions in your first week.

Ghost GDP: The Economic Problem Nobody Wants to Talk About

Jason made it personal. SaaStr went from 12 employees to 2 humans plus AI agents. The business generates eight figures in annual revenue. The agents—Repley, Art, Qualie, Monty—create massive value. But they buy nothing except tokens.

"Our 10 agents generate millions of revenue, but they buy nothing. They work all weekend long. They're good kids. They create a lot of noise. But they buy nothing. Nothing except tokens."

This is the Ghost GDP problem from the Citrini piece, stripped of the clickbait: productivity gains that accrue to fewer and fewer people while reducing the number of consumers who can spend in the economy.

Rory pushed back hard. Productivity gains have been the engine of growth for 200 years. We went from 80% of people on farms to 4%. We produce so much food we're all fat. The displaced workers found other work. "Across the scope of history, productivity is freaking awesome. It's the only thing that's made us rich."

But he conceded the short-term concern: if disruption happens extraordinarily quickly and people don't have time to adjust, you get structural dislocation. "If all the 45-year-old programmers are let go at the same time and 6 million programmers are on the street and there's no other work for them and it happens in a month, then yes—in the short term there would be a GDP hit."

Jason asked Claude to parse the data on a 50% reduction in tech headcount. The answer: $600 billion to $900 billion in GDP impact, 4 to 5 million total jobs lost including multiplier effects, and local economic devastation in five to six cities where tech is concentrated. "It would be one of the largest economic shocks in US history outside of a world war or pandemic."

But Rory's reframe cut through the drama: even if the entire software industry got nuked, it would represent less than 1.5% of all US jobs.
"We didn't bleed in Silicon Valley when the car industry went down the toilet. Don't hold your breath thinking they're going to come for us."

Harry pointed to Japan's 1990s as a recent precedent for productivity gains not dispersing to the broader population. Jason agreed—meeting with Japanese B2B founders last November, every company was talking about how its seat base inherently shrinks each year as the economy contracts.

Shopify's Quiet Warning Sign

Toby Lütke gets praised as one of the best CEOs in tech. Shopify is handling AI as well as any incumbent. And here's the number that should worry everyone: Shopify has the same number of employees it had three years ago. Not a single net headcount added. Revenue grew 50% to $12 billion.

That sounds like efficiency. It is efficiency. But it's also an economic loss relative to the tech lifestyle we all lived just a few years ago. Revenue up 50% with zero headcount growth means all that incremental value goes to shareholders, not to new employees spending in the economy.

Jason predicted something more dramatic: "One of the leaders in the next 12 months is going to do an Elon Musk and just cut half their team in one day. They're going to lay off half the entire company."

The most likely candidates? The PE-backed, highly levered SaaS companies carrying 6x EBITDA in debt. If you bought a company at 8-9x and it's now trading at 4x with single-digit growth, the math doesn't solve any other way. You have to cut dramatically. Expect to see five to eight startups at $50 to $200 million in revenue mashed together at nominal prices of 1.5-2x revenue. Frankenstein B2B companies with 20 products, professional management, and one shared ambition: maybe IPO in 2027.

Figma's Epic Quarter That Nobody Celebrated

Figma posted Q4 2025 numbers that would have been cause for champagne two years ago: $1.2 billion ARR growing 40% year-over-year (accelerating from 38% in Q3), 97% GRR, 136% NDR among $10K+ customers. Stock up 15% after earnings.
Rory framed it as the marquee fight: "In the right corner, Dylan Field, heavyweight champion of the world. In the left corner, Lovable, Replit, ringman Jason Lemkin."

Jason's response was honest and conflicted. "There's nothing not to love in the quarter. Epic company. Epic quarter. We've just given up on the present. We're all panicked about the future and you get no credit for a great quarter."

The deeper concern: Figma's products are more workflow-oriented than people realize—collaborative systems, not just pixel-perfect design tools. That gives them durability. But Jason would be shocked if, by year-end, Claude Code can't automatically create designs as elegant as a professional designer's work. "It has ingested every single website and mobile app on planet Earth. It can reproduce an iterative version that's just as good. The only reason it can't today is they haven't focused on it."

And Figma itself is leaning into the integration—calling Claude Code integration one of its biggest growth drivers. But that's the Fortnite circle again. What happens when the native integration overlaps more and more, and sometimes you skip Figma entirely?

The Momentum vs. Value Debate: Where to Put Your Money

Jason changed his mind since last week. Instead of bargain shopping, he's going momentum. Only five core stocks are up over the trailing 12 months: Palantir, Figma, Cloudflare, Shopify, and a small handful of others. "In an age of uncertainty, I'm going to bet on whoever has gravitas. Momentum is gravitas because momentum builds on itself. This isn't fake hype. This is real momentum."

But the tension is everywhere. The greatest dislocation in the market? Klaviyo versus Shopify. Klaviyo is essentially a Shopify derivative—nearly 100% of its revenue is attached. Yet Shopify is up 2.63% over 12 months while Klaviyo is down 58%. As a bargain hunter, you'd buy Klaviyo. But Rory identified the trap: if Shopify is going to thrive, Toby has to build agents on top of his platform.
And fundamentally, he has to take the market cap that currently sits with Klaviyo. "He probably has to kill you in order to survive."

The most extreme dislocation: Palantir up 41% versus Atlassian down 74.85% over the same period. Atlassian is accelerating from 20% to 23% growth at $6.3 billion in revenue. It's the biggest decliner of the group despite accelerating. "As armchair value investors, you couldn't find anything better than Atlassian. Accelerating and beaten up."

Rory provided the framework: over 6 to 18 months, momentum plays work and value plays don't. Over five years, value plays work and momentum's advantage disappears. "The trick is figuring out when you're transitioning from one to the other."

Jason's decision: use a one-year lookback as the measurement point and buy from the winners. "I've already made the value bets in the past. I lost money on all of them."

OpenAI's $110 Billion Board Meeting

OpenAI is doubling spend to $665 billion by 2030 while upping revenue forecasts by 27% to $280 billion—based largely on products that don't exist today: hardware, ads, and other new business lines.

"We've all had this board meeting," Jason laughed. "Good news and bad news. Good news is we're making up more revenue 3 years out. Bad news is I need another $110 billion to get there."

The slides, leaked to The Information, felt like a startup board meeting on steroids. A beautiful stacked chart where three of the colors have never been done yet. "We're raising our forecast 30%. And we just need another $80 million to do it. Except with three extra zeros attached to every number."

Rory noted the narrative shift: Claude now gets the benefit of the doubt—people believe it can kill everything. OpenAI, the former darling, gets no benefit of the doubt. But "nobody is ever as good or as bad as they seem." The practical assessment: OpenAI is still the clear winner in consumer mind share.
But when you run the math—consumer subscription, enterprise (lagging Anthropic), agentic products that don't exist, and $30-77 billion in "consumer monetization beyond subscription"—there's a lot of leaning into the future.

"As long as these companies are perceived as so powerful that they can destroy everything, they will be able to get money. Because if you believe the models are going to take over the world, the only rational response as an investor is: I better get me some models."

Quotable Moments

Jason Lemkin

> "Almost all the B2B software we use today is terrible now. It's terrible. I can't talk to it. I can barely bring myself to use WordPress. All these products are terrible now because AI software is so good."

> "I talked to three founders over the weekend of near-public companies. Your agent is not great. You're being disrupted by the agentic layer. They are all being disrupted in real time and they are stressed as f."

> "Our 10 agents generate millions of revenue, but they buy nothing. They work all weekend long. Repley, Art, Qualie, Monty—they're good kids. They create a lot of noise. But they buy nothing. Nothing except tokens."

Harry Stebbings

> "You only have to have a partial deceleration to your numbers. You only have to have a partial risk. You only have to see more of the value of HubSpot or DocuSign flow to an agent for these stocks to do worse."

> "I have the data. I have an investment in this space. I already know the answer is yes. It's not my opinion. I have data from over 10,000 restaurants."

> "YouTube is the number one way we consume video and it is entirely based on a recommendation engine. It is the best recommendation company on planet Earth. Channels don't matter anymore. Followers don't matter anymore. Nothing matters."

Rory O'Driscoll

> "When you are priced for perfection, anything less than perfection will be a kick in the nuts."
> "Across the arc of the last 200 years since the industrial revolution, productivity gains have been good. We used to have 80% of people working on farms. We now have 4%. And we sell so much food that we're all fat."

> "If the only thing that's impacted here is the B2B software industry, my suspicion is the rest of the world will go, 'Yeah, I'm willing to lose those guys.'"

SaaStr AI Insider
Operational LLM engineering is about cost predictability. Model selection matters, but token flow design determines whether your system survives real traffic.
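That point about token flow can be made concrete with a back-of-the-envelope cost model: fix a traffic profile and per-token prices, then see how prompt size and cache hit rate, not model choice alone, drive the monthly bill. The sketch below is illustrative only; all prices, traffic figures, and the 90% cached-token discount are hypothetical placeholders you would replace with your provider's actual pricing.

```python
# Back-of-the-envelope LLM serving cost model. Every number used
# with this function is a hypothetical assumption, not real pricing.

def monthly_token_cost(
    requests_per_day: int,
    input_tokens: int,            # avg prompt tokens per request
    output_tokens: int,           # avg completion tokens per request
    price_in_per_m: float,        # $ per 1M input tokens
    price_out_per_m: float,       # $ per 1M output tokens
    cache_hit_rate: float = 0.0,  # fraction of input tokens served from cache
    cached_discount: float = 0.9, # assumed: cached input tokens cost 90% less
) -> float:
    """Estimate one month (30 days) of token spend."""
    days = 30
    total_in = requests_per_day * days * input_tokens
    total_out = requests_per_day * days * output_tokens
    cached = total_in * cache_hit_rate
    uncached = total_in - cached
    cost_in = (uncached + cached * (1 - cached_discount)) / 1e6 * price_in_per_m
    cost_out = total_out / 1e6 * price_out_per_m
    return cost_in + cost_out
```

Run it with two prompt sizes and the point falls out immediately: doubling the average prompt doubles input spend linearly with traffic, while a high cache hit rate claws most of it back. That trade-off is token flow design.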