
Implement secure API access to your Amazon Q Business applications with IAM federation user access management

AWS Machine Learning

Amazon Q Business provides a rich set of APIs to perform administrative tasks and to build an AI assistant with a customized user experience for your enterprise. In this post, we show how to use Amazon Q Business APIs when using AWS Identity and Access Management (IAM) federation for user access management. The sample scripts samlapp.py …
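The teaser is cut off above, but the pattern it describes can be sketched: exchange a SAML assertion for temporary credentials with AWS STS, then call an Amazon Q Business API with those credentials. This is a minimal, hypothetical sketch, not the post's samlapp.py; the role ARNs, application ID, and message are placeholders.

```python
# Hypothetical sketch, not the post's samlapp.py: exchange a SAML assertion for
# temporary credentials via AWS STS, then call the Amazon Q Business ChatSync API.
# Role ARNs, the application ID, and the user message are placeholders.
import boto3

saml_assertion_b64 = "..."  # base64-encoded SAML assertion returned by your IdP

sts = boto3.client("sts")
creds = sts.assume_role_with_saml(
    RoleArn="arn:aws:iam::111122223333:role/QBusinessFederationRole",  # placeholder
    PrincipalArn="arn:aws:iam::111122223333:saml-provider/MyIdP",      # placeholder
    SAMLAssertion=saml_assertion_b64,
)["Credentials"]

qbusiness = boto3.client(
    "qbusiness",
    aws_access_key_id=creds["AccessKeyId"],
    aws_secret_access_key=creds["SecretAccessKey"],
    aws_session_token=creds["SessionToken"],
)
response = qbusiness.chat_sync(
    applicationId="my-qbusiness-app-id",          # placeholder
    userMessage="Summarize our expense policy.",
)
print(response["systemMessage"])
```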


Generate customized, compliant application IaC scripts for AWS Landing Zone using Amazon Bedrock

AWS Machine Learning

Amazon Bedrock is a fully managed service that offers a choice of high-performing foundation models (FMs) from leading AI companies like AI21 Labs, Anthropic, Cohere, Meta, Stability AI, and Amazon with a single API, along with a broad set of capabilities to build generative AI applications with security, privacy, and responsible AI.
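As a rough illustration of that single-API access pattern (not the post's Landing Zone solution), the sketch below asks a Bedrock foundation model to draft a small IaC snippet through the Converse runtime API; the model ID and prompt are illustrative placeholders.

```python
# Minimal sketch: generate an IaC snippet with a Bedrock model via the single
# runtime (Converse) API. Model ID and prompt are illustrative placeholders.
import boto3

bedrock = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock.converse(
    modelId="anthropic.claude-3-sonnet-20240229-v1:0",  # any Bedrock text model
    messages=[{
        "role": "user",
        "content": [{"text": "Generate a CloudFormation template for an encrypted "
                             "S3 bucket that blocks public access."}],
    }],
)
print(response["output"]["message"]["content"][0]["text"])
```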


Trending Sources


Best practices for building robust generative AI applications with Amazon Bedrock Agents – Part 1

AWS Machine Learning

This two-part series explores best practices for building generative AI applications using Amazon Bedrock Agents. Ground truth data provides a benchmark for expected agent behavior, including the interaction with existing APIs, knowledge bases, and guardrails connected to the agent.
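One way to use such a benchmark, sketched here under the assumption that the test cases are a simple list of question/expected pairs, is to replay them against the agent through the InvokeAgent runtime API and compare the answers; the agent and alias IDs below are placeholders, not values from the series.

```python
# Hedged sketch: replay benchmark questions against a Bedrock agent and collect
# its answers for comparison with expected behavior. IDs and test data are
# placeholders; the ground-truth format is an assumption.
import boto3

runtime = boto3.client("bedrock-agent-runtime")

test_cases = [
    {"question": "What is the refund policy?", "expected": "Refunds within 30 days."},
]

for case in test_cases:
    stream = runtime.invoke_agent(
        agentId="AGENT_ID",             # placeholder
        agentAliasId="AGENT_ALIAS_ID",  # placeholder
        sessionId="benchmark-session",
        inputText=case["question"],
    )
    # The response is an event stream; concatenate the returned text chunks.
    answer = "".join(
        event["chunk"]["bytes"].decode("utf-8")
        for event in stream["completion"]
        if "chunk" in event
    )
    print(case["question"], "->", answer)
```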


Governing ML lifecycle at scale: Best practices to set up cost and usage visibility of ML workloads in multi-account environments

AWS Machine Learning

Reactive governance focuses on finding resources that lack proper tags, using tools such as the AWS Resource Groups Tagging API, AWS Config rules, and custom scripts. AWS Resource Groups Tagging API – lets you tag or untag resources and list resources along with their tags, so you can take action when resources lack the necessary tags.
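A minimal sketch of that reactive pattern: scan resources with the Resource Groups Tagging API and report any that are missing required tag keys. The required keys below are an example policy, not from the original post.

```python
# Sketch: find resources missing required tags via the Resource Groups Tagging API.
import boto3

REQUIRED_TAGS = {"CostCenter", "Owner", "Environment"}  # example tagging policy

tagging = boto3.client("resourcegroupstaggingapi")
paginator = tagging.get_paginator("get_resources")

for page in paginator.paginate():
    for resource in page["ResourceTagMappingList"]:
        present = {tag["Key"] for tag in resource.get("Tags", [])}
        missing = REQUIRED_TAGS - present
        if missing:
            print(f"{resource['ResourceARN']} is missing tags: {sorted(missing)}")
```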


Training large language models on Amazon SageMaker: Best practices

AWS Machine Learning

In this post, we dive into tips and best practices for successful LLM training on Amazon SageMaker Training. The post covers all the phases of an LLM training workload and describes associated infrastructure features and best practices. Some of the best practices in this post refer specifically to ml.p4d.24xlarge instances.
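For orientation only, here is a hedged sketch of launching a distributed training job on ml.p4d.24xlarge instances with the SageMaker Python SDK; the entry point, role, hyperparameters, and S3 URI are placeholders, not the post's actual configuration.

```python
# Illustrative sketch: distributed LLM training job on ml.p4d.24xlarge instances.
from sagemaker.pytorch import PyTorch

estimator = PyTorch(
    entry_point="train.py",                  # your training script (placeholder)
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_type="ml.p4d.24xlarge",         # 8x A100 GPUs per instance
    instance_count=2,
    framework_version="2.1",
    py_version="py310",
    distribution={"torch_distributed": {"enabled": True}},  # multi-node data parallel
    hyperparameters={"epochs": 1, "per_device_batch_size": 8},
)
estimator.fit({"train": "s3://my-bucket/llm-training-data/"})  # placeholder S3 URI
```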


Best practices for TensorFlow 1.x acceleration training on Amazon SageMaker

AWS Machine Learning

Because many data scientists may lack experience with the acceleration training process, in this post we show you the factors that matter for fast deep learning model training and the best practices for acceleration training with TensorFlow 1.x. We discuss best practices in the following areas: accelerating training on a single instance.
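As one hedged example of single-instance acceleration (an assumption about the approach, not the post's exact recipe), a Horovod-enabled TF 1.x script can be launched across all GPUs of one instance via the SageMaker TensorFlow estimator; the script, role, and instance choice are placeholders.

```python
# Hedged sketch: Horovod-based multi-GPU training on a single instance using the
# SageMaker TensorFlow estimator. Script, role, and instance type are placeholders.
from sagemaker.tensorflow import TensorFlow

estimator = TensorFlow(
    entry_point="train_tf1.py",              # Horovod-enabled TF 1.x script (placeholder)
    role="arn:aws:iam::111122223333:role/SageMakerExecutionRole",  # placeholder
    instance_type="ml.p3.16xlarge",           # single instance with 8 GPUs
    instance_count=1,
    framework_version="1.15",
    py_version="py3",
    distribution={"mpi": {"enabled": True, "processes_per_host": 8}},
)
estimator.fit("s3://my-bucket/tf1-training-data/")  # placeholder S3 URI
```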


Best practices for load testing Amazon SageMaker real-time inference endpoints

AWS Machine Learning

This post describes the best practices for load testing a SageMaker endpoint to find the right configuration for instance count and instance size. Note that the model container also includes any custom inference code or scripts that you have provided for inference.
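The post covers dedicated load-testing tooling; as a minimal, assumption-laden sketch of the idea, the snippet below fires concurrent InvokeEndpoint calls and reports latency percentiles. The endpoint name, payload, and concurrency settings are placeholders.

```python
# Minimal load-test sketch: concurrent InvokeEndpoint calls with latency stats.
import time
from concurrent.futures import ThreadPoolExecutor

import boto3

runtime = boto3.client("sagemaker-runtime")

def invoke_once(_):
    start = time.perf_counter()
    runtime.invoke_endpoint(
        EndpointName="my-endpoint",         # placeholder endpoint name
        ContentType="application/json",
        Body=b'{"inputs": "sample request"}',  # placeholder payload
    )
    return time.perf_counter() - start

with ThreadPoolExecutor(max_workers=10) as pool:  # 10 concurrent clients
    latencies = sorted(pool.map(invoke_once, range(200)))

print(f"p50={latencies[len(latencies) // 2]:.3f}s  "
      f"p95={latencies[int(len(latencies) * 0.95)]:.3f}s")
```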