Microsoft Rebuts Claims Linking Its AI to Gaza Targeting
Allegations Spark Global Backlash
A storm of controversy ignited after investigative reports alleged that the Israel Defense Forces had used Microsoft AI tools, including facial recognition and large language models, to identify and target individuals in Gaza. The claims, published by the media outlets +972 Magazine and Local Call, suggested these tools fed into "Lavender," an AI-assisted military system that allegedly reviewed and marked thousands of people as potential militant targets. Human rights groups and online commentators reacted swiftly, arguing that any tech firm complicit in warfare applications without transparency or oversight would be violating basic ethical principles. Microsoft's association was inferred from its partnerships with, and prior investments in, Israeli tech companies with military ties.
Microsoft Fires Back, Refutes Involvement
Microsoft issued a strong public denial, stating unequivocally that it had no role in developing or deploying any AI systems used by the IDF in Gaza, and that reports suggesting otherwise were "inaccurate and misleading." The company emphasized its responsible AI principles and insisted that any investments, such as those made through its M12 venture fund, are not managed by Microsoft directly and do not involve operational control over portfolio companies. The tech giant also pointed to its AI ethics framework, stressing its commitment to preventing the use of its technology in military applications that conflict with human rights norms. Despite the denials, the episode has intensified scrutiny of AI's role in modern warfare and of the accountability of major tech firms.