Microsoft, OpenAI, Amazon, and Google are among the 16 organizations that have signaled their willingness to police their own AI development procedures, with the goal of limiting misuse and promoting responsible deployment. The companies agreed to three main goals outlined in the ‘Frontier AI Safety Commitments’ document. Over the coming months, they are tasked with defining their ‘kill’ thresholds: the levels of risk at which they would halt further development or deployment of a model.