Govern generative AI in the enterprise with Amazon SageMaker Canvas

AWS Machine Learning

Governing which generative AI models users can access is crucial for compliance, security, and governance. In this post, we analyze strategies for governing access to Amazon Bedrock and SageMaker JumpStart models from within SageMaker Canvas using AWS Identity and Access Management (IAM) policies. We provide code examples tailored to common enterprise governance scenarios.
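As a rough illustration of that pattern (not code from the post), the sketch below attaches an inline IAM policy to a hypothetical SageMaker Canvas execution role so that only one allow-listed Amazon Bedrock model can be invoked; the role name and model ARN are placeholders.

import json
import boto3

# Placeholders -- substitute the execution role used by your Canvas users
# and the Bedrock foundation model you want to allow.
CANVAS_EXECUTION_ROLE = "SageMakerCanvasExecutionRole"
ALLOWED_MODEL_ARN = (
    "arn:aws:bedrock:us-east-1::foundation-model/"
    "anthropic.claude-3-haiku-20240307-v1:0"
)

# Inline policy that permits invoking only the allow-listed Bedrock model.
policy = {
    "Version": "2012-10-17",
    "Statement": [
        {
            "Sid": "AllowApprovedBedrockModelOnly",
            "Effect": "Allow",
            "Action": [
                "bedrock:InvokeModel",
                "bedrock:InvokeModelWithResponseStream",
            ],
            "Resource": ALLOWED_MODEL_ARN,
        }
    ],
}

iam = boto3.client("iam")
iam.put_role_policy(
    RoleName=CANVAS_EXECUTION_ROLE,
    PolicyName="CanvasBedrockModelAllowList",
    PolicyDocument=json.dumps(policy),
)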

Centralize model governance with SageMaker Model Registry Resource Access Manager sharing

AWS Machine Learning

Customers can use the SageMaker Studio UI or APIs to specify the SageMaker Model Registry model to be shared and grant access to specific AWS accounts or to everyone in the organization. This streamlines ML workflows, enables better visibility and governance, and accelerates the adoption of ML models across the organization.
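A minimal sketch of the API path, assuming a model package group ARN and an organization ARN (both placeholders here): AWS Resource Access Manager shares the model group with the chosen principals.

import boto3

# Placeholder ARNs -- use your own model package group and account, OU, or organization.
MODEL_GROUP_ARN = "arn:aws:sagemaker:us-east-1:111122223333:model-package-group/fraud-detection"
ORG_PRINCIPAL = "arn:aws:organizations::111122223333:organization/o-exampleorgid"

ram = boto3.client("ram")
share = ram.create_resource_share(
    name="fraud-detection-model-share",
    resourceArns=[MODEL_GROUP_ARN],
    principals=[ORG_PRINCIPAL],     # an account ID, OU ARN, or organization ARN
    allowExternalPrincipals=False,  # keep the share inside the organization
)
print(share["resourceShare"]["resourceShareArn"])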

Improve governance of models with Amazon SageMaker unified Model Cards and Model Registry

AWS Machine Learning

You can now register machine learning (ML) models in Amazon SageMaker Model Registry with Amazon SageMaker Model Cards, making it straightforward to manage governance information for specific model versions directly in SageMaker Model Registry in just a few clicks.
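A hedged sketch of what that registration call can look like with boto3; the group name, container image, model data location, and model card fields below are illustrative placeholders rather than values from the post.

import json
import boto3

sm = boto3.client("sagemaker")

# Minimal, illustrative model card content.
model_card_content = json.dumps({
    "model_overview": {
        "model_description": "XGBoost churn classifier",
        "model_owner": "ml-platform-team",
    },
})

sm.create_model_package(
    ModelPackageGroupName="churn-models",            # placeholder group name
    ModelApprovalStatus="PendingManualApproval",
    InferenceSpecification={
        "Containers": [{
            "Image": "123456789012.dkr.ecr.us-east-1.amazonaws.com/xgboost:latest",
            "ModelDataUrl": "s3://example-bucket/churn/model.tar.gz",
        }],
        "SupportedContentTypes": ["text/csv"],
        "SupportedResponseMIMETypes": ["text/csv"],
    },
    # Governance information travels with the model version via the unified model card.
    ModelCard={
        "ModelCardContent": model_card_content,
        "ModelCardStatus": "Draft",
    },
)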

Governing ML lifecycle at scale: Best practices to set up cost and usage visibility of ML workloads in multi-account environments

AWS Machine Learning

Tagging is an effective scaling mechanism for implementing cloud management and governance strategies. This post outlines steps you can take to implement a comprehensive tagging governance strategy across accounts, using AWS tools and services that provide visibility and control.
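As one small piece of such a strategy (values below are placeholders), standardized cost-allocation tags can be applied to SageMaker resources programmatically, for example:

import boto3

sm = boto3.client("sagemaker")

# Placeholder ARN -- the same call works for training jobs, endpoints, domains, and other resources.
DOMAIN_ARN = "arn:aws:sagemaker:us-east-1:111122223333:domain/d-exampledomainid"

# Standardized cost-allocation tags defined by the tagging governance strategy.
sm.add_tags(
    ResourceArn=DOMAIN_ARN,
    Tags=[
        {"Key": "CostCenter", "Value": "ML-1234"},
        {"Key": "BusinessUnit", "Value": "risk-analytics"},
        {"Key": "Environment", "Value": "prod"},
    ],
)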

Governing the ML lifecycle at scale: Centralized observability with Amazon SageMaker and Amazon CloudWatch

AWS Machine Learning

This post is part of an ongoing series on governing the machine learning (ML) lifecycle at scale. To start from the beginning, refer to Governing the ML lifecycle at scale, Part 1: A framework for architecting ML workloads using Amazon SageMaker. In this post, we show how to configure centralized logging of API calls across multiple accounts using AWS CloudTrail.
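A minimal sketch of that logging setup, assuming a management-account context and an existing S3 bucket with a CloudTrail bucket policy (names are placeholders):

import boto3

cloudtrail = boto3.client("cloudtrail")

# Placeholder names -- the bucket must already exist and allow CloudTrail to write to it.
trail = cloudtrail.create_trail(
    Name="ml-governance-org-trail",
    S3BucketName="central-ml-audit-logs",
    IsMultiRegionTrail=True,
    IsOrganizationTrail=True,  # capture API calls from every account in the organization
)
cloudtrail.start_logging(Name=trail["TrailARN"])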

Use the ApplyGuardrail API with long-context inputs and streaming outputs in Amazon Bedrock

AWS Machine Learning

The new ApplyGuardrail API enables you to assess any text using your preconfigured guardrails in Amazon Bedrock, without invoking the foundation models (FMs). In this post, we demonstrate how to use the ApplyGuardrail API with long-context inputs and streaming outputs. For example, you can now use the API with models hosted on Amazon SageMaker.
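A hedged sketch of the chunk-and-assess pattern for long inputs (guardrail ID, version, and chunk size are placeholders, and the exact text unit limits depend on your guardrail configuration):

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholders -- use your guardrail ID/version and real document text.
GUARDRAIL_ID = "gr-example123"
GUARDRAIL_VERSION = "1"
long_document = "<long input text, such as a full contract or transcript>"

# Split the long-context input into chunks and assess each one independently.
CHUNK_CHARS = 20_000
chunks = [long_document[i:i + CHUNK_CHARS] for i in range(0, len(long_document), CHUNK_CHARS)]

for chunk in chunks:
    result = bedrock_runtime.apply_guardrail(
        guardrailIdentifier=GUARDRAIL_ID,
        guardrailVersion=GUARDRAIL_VERSION,
        source="INPUT",  # use "OUTPUT" to assess model responses, including streamed segments
        content=[{"text": {"text": chunk}}],
    )
    if result["action"] == "GUARDRAIL_INTERVENED":
        print("Guardrail intervened:", result["outputs"])
        break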

How Deltek uses Amazon Bedrock for question and answering on government solicitation documents

AWS Machine Learning

This post provides an overview of a custom solution developed by the AWS Generative AI Innovation Center (GenAIIC) for Deltek, a globally recognized standard for project-based businesses in both government contracting and professional services. Deltek serves over 30,000 clients with industry-specific software and information solutions.
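As a generic illustration of that question-answering pattern (not Deltek's actual pipeline; the model ID, excerpt, and prompt are placeholders), a retrieved passage and a question can be passed to a Bedrock model in a single call:

import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

# Placeholders -- in a real solution the excerpt would come from a retrieval step.
solicitation_excerpt = "<retrieved passages from the solicitation document>"
question = "What is the proposal submission deadline?"

response = bedrock_runtime.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",
    messages=[{
        "role": "user",
        "content": [{
            "text": f"Answer using only this context:\n{solicitation_excerpt}\n\nQuestion: {question}"
        }],
    }],
)
print(response["output"]["message"]["content"][0]["text"])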