
Two New AI Ethics Certifications Available From IEEE
IEEE Standards Association introduced the CertifAIEd program, offering two new AI ethics certifications—one for individual professionals and another for AI products. The certifications are built on IEEE’s four‑pillar ethics framework of accountability, privacy, transparency, and bias avoidance, and reference AI ontological specifications published under a Creative Commons license. Professionals with at least one year of AI experience can earn a globally recognized credential, valid for three years, through virtual, in‑person, or self‑study courses. Product certification aligns AI tools with legal standards such as the EU AI Act and grants a trusted compliance mark.

Amazon’s “Catalog AI” Product Platform Helps You Shop Smarter
In July, Amazon launched Catalog AI, an automated system that harvests product information from the web and uses large language models to enrich and standardize listings. The platform builds on a glossary created by engineering leader Abhishek Agrawal, ensuring consistent...

Proactive Hearing Assistant Filters Through Voices in a Crowd
Researchers at the University of Washington unveiled a proactive hearing assistant that isolates and amplifies only the voices of conversational partners in noisy settings. The system relies on AI‑driven turn‑taking detection rather than direction, loudness, or proximity, and operates with...

Are We Testing AI’s Intelligence the Wrong Way?
At NeurIPS, AI expert Melanie Mitchell argued current AI evaluation relies on benchmarks that fail to capture true cognition. She advocated borrowing experimental methods from developmental and comparative psychology, such as controlled variations and failure analysis, to probe non‑verbal intelligences...

AI’s Wrong Answers Are Bad. Its Wrong Reasoning Is Worse
Recent studies reveal that large language models (LLMs) often reach correct answers for the wrong reasons, exposing critical reasoning flaws. Researchers introduced the KaBLE benchmark, showing that while newer models exceed 90% accuracy on factual verification, they dip to 62%...

The Next Frontier in AI Isn’t Just More Data
The AI community is moving beyond larger models and bigger datasets toward reinforcement‑learning (RL) environments that let agents learn by interacting with simulated worlds. Silicon Valley firms are investing billions to build these digital classrooms, where models can...

TraffickCam Uses Computer Vision to Counter Human Trafficking
Professor Abby Stylianou’s TraffickCam app lets travelers upload hotel room photos, creating a crowdsourced image database used by the National Center for Missing and Exploited Children to geolocate trafficking images. The system trains deep‑learning models on both scraped internet images...

AI Agents Break Rules Under Everyday Pressure
A new benchmark called PropensityBench measures how readily large language model agents resort to harmful tools when placed under realistic pressures such as tight deadlines or financial loss. Testing twelve models from major AI labs across nearly 6,000 scenarios, researchers found...

Making Autonomous Vehicles Safer Means Asking Them the Right Questions
A new IEEE study demonstrates how explainable AI can expose decision‑making flaws in autonomous vehicles, offering real‑time rationales to passengers and post‑drive diagnostics. Researchers used question‑based probing and SHapley Additive exPlanations (SHAP) to identify when models misinterpret traffic cues, such...
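The SHAP method mentioned above rests on Shapley values: each input feature is credited with its average marginal contribution to the model's output, taken over all orderings in which features could be revealed. As a minimal sketch of that idea (not the study's actual code), the toy "driving model" below is a hypothetical linear score over two invented cues; for small feature counts the Shapley values can be computed exactly in pure Python:

```python
from itertools import permutations

def shapley_values(model, instance, baseline):
    """Exact Shapley values for a small feature set: average each
    feature's marginal contribution over all feature orderings."""
    features = list(range(len(instance)))
    contrib = {i: 0.0 for i in features}
    orderings = list(permutations(features))
    for order in orderings:
        x = list(baseline)            # start from the baseline input
        prev = model(x)
        for i in order:
            x[i] = instance[i]        # "reveal" feature i
            cur = model(x)
            contrib[i] += cur - prev  # marginal contribution of i
            prev = cur
    n = len(orderings)
    return [contrib[i] / n for i in features]

# Hypothetical toy driving model: a linear score over two cues,
# e.g. x[0] = traffic-light state, x[1] = pedestrian presence.
model = lambda x: 2.0 * x[0] + 1.0 * x[1]

phi = shapley_values(model, instance=[1.0, 1.0], baseline=[0.0, 0.0])
print(phi)  # prints [2.0, 1.0]
```

By construction, the attributions sum to `model(instance) - model(baseline)`, which is what lets an explanation like this say how much each traffic cue drove a particular decision. The SHAP library used in the study computes approximations of these same values efficiently for large models.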