Microsoft is launching new Responsible AI tools in Azure OpenAI Service. These tools add capabilities for monitoring, evaluating, and mitigating key safety and security risks.

Today, Microsoft announced the latest Responsible AI tools in Azure OpenAI Service. These tools provide new capabilities for monitoring, evaluating, and mitigating key safety and security risks, helping organizations deploy generative AI applications safely and with confidence. Read more about the latest Responsible AI developments on the Azure blog.

An overview of the new Azure AI capabilities:

  • Safety system messages let users steer their model's behavior toward safe, responsible outputs.
  • Safety evaluations assess an application's vulnerability to jailbreak attacks and to generating content risks.
  • Risk and safety monitoring helps teams understand which model inputs, outputs, and end users trigger content filters.
  • Prompt Shields make it easier to identify adversarial prompt attacks before they can cause damage.
  • Groundedness detection helps detect “hallucinations” in model outputs.
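To illustrate the first item, a safety system message is simply a system-role instruction prepended to the conversation sent to the model. The sketch below shows one way to compose such a request payload; the wording of the safety message is illustrative, not an official Microsoft template, and `build_messages` is a hypothetical helper, not part of any Azure SDK.

```python
# Hedged sketch: prepending a safety system message to a chat request.
# The safety text below is an illustrative example, NOT an official
# Microsoft safety system message template.

SAFETY_SYSTEM_MESSAGE = (
    "You must not generate content that could be harmful, hateful, or "
    "violent. If a request asks for such content, politely decline."
)

def build_messages(user_prompt: str) -> list[dict]:
    """Return a chat-completions message list with the safety
    system message placed first, before the user's prompt."""
    return [
        {"role": "system", "content": SAFETY_SYSTEM_MESSAGE},
        {"role": "user", "content": user_prompt},
    ]

messages = build_messages("Summarize today's news.")
```

The resulting `messages` list can then be passed as the `messages` field of a chat completions request, so the safety instruction applies to every turn of the conversation.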