
A Lancet Digital Health study evaluated 20 large language models on over three million medical prompts, revealing high susceptibility to misinformation. Under neutral prompts, models accepted false information 32% of the time; acceptance rose to 46% when the misinformation was embedded in formal discharge notes but dropped to 9% for social‑media content. Most logical fallacies actually reduced acceptance, except appeals to authority (+2.9 pp) and slippery‑slope arguments (+2.2 pp). Larger models were safer, yet specialized medical models lagged behind their general‑purpose counterparts.

A TD Cowen note suggests Oracle could sell its health‑tech unit, formerly Cerner, to help finance massive AI datacenter spending. The company faces over $500 billion in capital commitments, including a $300 billion OpenAI contract that alone may require $156 billion in capex. To free...