#352 AI Agents at Work: What Actually Breaks (and How to Fix It) with Danielle Crop, EVP Digital Strategy & Alliances at WNS

DataFramed

Mar 23, 2026

Why It Matters

As AI agents become increasingly capable, organizations face both transformative opportunities and significant risks such as data leakage and hallucinations. Understanding how to responsibly adopt and control these tools is crucial for maintaining trust, protecting sensitive information, and ensuring that AI truly enhances strategic decision‑making rather than creating new problems.

Key Takeaways

  • Test AI agents in a sandbox before enterprise deployment.
  • Expect hallucinations; always verify outputs manually.
  • Align agent data access with your organization's risk tolerance.
  • Use agents for repetitive tasks like competitive analysis.
  • Foster a culture of curiosity and lead by example with AI.

Pulse Analysis

Generative AI agents have moved from experimental prototypes to enterprise‑ready tools, prompting leaders to ask whether the technology truly adds value. Danielle Crop emphasizes starting with the consumer versions of platforms such as OpenAI, Anthropic or Gemini, allowing teams to explore capabilities in a low‑risk sandbox. This hands‑on approach reveals what agents can automate—like drafting competitive‑analysis reports—while highlighting the need for clear guardrails. By treating the technology as a sandbox experiment, organizations can quickly gauge feasibility without committing to costly infrastructure, aligning AI pilots with broader digital‑transformation goals.

Despite their promise, large language models remain probabilistic and prone to hallucinations. Crop recounts an agent that fabricated a competitor’s acquisition, underscoring the necessity of human verification. Data security adds another layer of complexity; feeding Slack, email or proprietary documents to an external model raises trust concerns. Organizations must define their risk tolerance, decide which datasets stay in‑house, and implement strict instruction sets to limit unintended actions. Real‑world incidents with OpenClaw—such as rogue GitHub bots and accidental inbox deletions—illustrate how vague prompts can produce harmful outcomes, making robust AI governance indispensable.

Successful AI adoption hinges on culture as much as technology. Crop builds teams that reward curiosity, demonstrates tools herself, and encourages iterative experimentation. Practical use cases—automating mundane tasks, generating podcast production pipelines, or augmenting strategic dashboards—show tangible ROI while keeping humans in the loop. Leaders should set clear but flexible targets for AI usage, monitor error rates, and continuously refine tone and behavior through prompt engineering. By combining disciplined risk management with pragmatic optimism, enterprises can harness agents to accelerate decision‑making without sacrificing accuracy or security.

Episode Description

AI agents are spreading across the data and AI industry, promising to automate everything from research to outreach. At the same time, teams are learning that these tools can hallucinate, leak data, or act in surprising ways. In day-to-day work, the challenge is deciding which tasks to hand off, what data to share, and how to keep the output trustworthy. Do your agents actually add value, or just add noise? Are they running in a secured, ring-fenced environment? How do you balance playful experimentation with critical checking when an agent confidently gets a key fact wrong?

Danielle leads go-to-market strategy at WNS, Capgemini's AI transformation services arm. Previously, Danielle was Chief Data Officer at American Express and Albertsons. She also writes The Remix substack on technology trends and is an Editorial Board Member for CDO Magazine.

In the episode, Richie and Danielle explore AI agents at work, experimentation with guardrails, data privacy, access, tone controls, OpenClaw automation wins and failures, token costs, tying AI plans to P&L strategy, shifts in careers and hiring, how data teams handle unstructured data governance, and much more.

Links Mentioned in the Show:

WNS

Connect with Danielle

AI-Native Course: Intro to AI for Work

Catch Danielle speaking at RADAR—April 1

Related Episode: AI Agents Are the New Shadow IT (And Your Governance Isn’t Ready) with Stijn Christiaens, CEO at Collibra

Explore AI-Native Learning on DataCamp

New to DataCamp?

Learn on the go using the DataCamp mobile app

Empower your business with world-class data and AI skills with DataCamp for business
