
Use the ApplyGuardrail API with long-context inputs and streaming outputs in Amazon Bedrock

AWS Machine Learning

The new ApplyGuardrail API enables you to assess any text using your preconfigured guardrails in Amazon Bedrock, without invoking the foundation models (FMs) themselves. In this post, we demonstrate how to use the ApplyGuardrail API with long-context inputs and streaming outputs. For example, you can now apply guardrails to models hosted on Amazon SageMaker.
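As a quick illustration, a minimal sketch of calling the ApplyGuardrail API with boto3 might look like the following; the guardrail ID, version, and region are placeholders for values from your own account.

```python
import boto3

# Minimal sketch: the guardrail ID, version, and region are placeholders.
bedrock_runtime = boto3.client("bedrock-runtime", region_name="us-east-1")

response = bedrock_runtime.apply_guardrail(
    guardrailIdentifier="your-guardrail-id",  # placeholder
    guardrailVersion="1",                     # placeholder
    source="OUTPUT",  # assess model output; use "INPUT" for user prompts
    content=[{"text": {"text": "Text generated by a SageMaker-hosted model."}}],
)

if response["action"] == "GUARDRAIL_INTERVENED":
    # The guardrail masked or blocked content; outputs hold the modified text.
    print(response["outputs"])
```

Because the API takes plain text, the same call works whether the text came from a Bedrock model, a SageMaker endpoint, or anywhere else.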


Your guide to generative AI and ML at AWS re:Invent 2024

AWS Machine Learning

Workshops – In these two-hour, hands-on learning opportunities, you'll build a solution to a problem and come away understanding the inner workings of the resulting infrastructure and cross-service interactions. Builders' sessions – These highly interactive 60-minute mini-workshops are conducted in small groups of fewer than 10 attendees.



Building a virtual meteorologist using Amazon Bedrock Agents

AWS Machine Learning

In this solution, we use Amazon Bedrock Agents together with various other AWS services to deploy a complete solution that lets you interact with an API providing real-time weather information. We also use an Amazon Cognito identity pool to provide temporary AWS credentials for the user while they interact with the Amazon Bedrock API.
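As a rough sketch (not the post's actual code), invoking an already-created Bedrock agent from Python could look like this; the agent ID, alias ID, and region are placeholders.

```python
import boto3

# Minimal sketch: agent ID and alias ID are placeholders for an agent
# you have already created in Amazon Bedrock.
agents_runtime = boto3.client("bedrock-agent-runtime", region_name="us-east-1")

response = agents_runtime.invoke_agent(
    agentId="AGENT_ID",             # placeholder
    agentAliasId="AGENT_ALIAS_ID",  # placeholder
    sessionId="weather-session-1",
    inputText="What's the weather in Seattle right now?",
)

# The agent's completion is returned as an event stream of text chunks.
answer = ""
for event in response["completion"]:
    if "chunk" in event:
        answer += event["chunk"]["bytes"].decode("utf-8")
print(answer)
```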


Secure a generative AI assistant with OWASP Top 10 mitigation

AWS Machine Learning

The LLM agent is an orchestrator of a set of steps that might be necessary to complete the desired request. These steps might involve both the use of an LLM and external data sources and APIs. The agent plugin controller component is responsible for API integration with those external data sources and APIs.
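To make the pattern concrete, here is a deliberately simplified, hypothetical sketch of a plugin controller; the class and plugin names are illustrative, not from the post.

```python
from typing import Callable, Dict

class PluginController:
    """Routes tool requests from the LLM agent to external APIs.

    Illustrative only: real controllers would also validate inputs,
    enforce auth, and log calls, per the OWASP mitigations.
    """

    def __init__(self) -> None:
        self._plugins: Dict[str, Callable[[str], str]] = {}

    def register(self, name: str, fn: Callable[[str], str]) -> None:
        self._plugins[name] = fn

    def call(self, name: str, query: str) -> str:
        if name not in self._plugins:
            # Reject tools the agent was never granted.
            raise KeyError(f"Unknown plugin: {name}")
        return self._plugins[name](query)

controller = PluginController()
controller.register("weather_api", lambda q: f"Sunny (stub result for {q!r})")

# The agent's orchestration step dispatches through the controller:
print(controller.call("weather_api", "Seattle"))
```

Keeping the allow-list of plugins outside the LLM's control is the point: the model can request a tool, but only registered integrations can run.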


Implement RAG while meeting data residency requirements using AWS hybrid and edge services

AWS Machine Learning

The embedding model, which is hosted on the same EC2 instance as the local LLM API inference server, converts the text chunks into vector representations. The prompt is forwarded to the local LLM API inference server instance, where it is tokenized and converted into a vector representation using the local embedding model.
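As an illustrative sketch only (the endpoint path and payload shape are assumptions about a local embedding server, not the post's actual interface), the embed-and-retrieve step might look like this:

```python
import numpy as np
import requests

# Hypothetical local embedding endpoint on the same EC2 instance.
EMBED_URL = "http://localhost:8080/embed"

def embed(text: str) -> np.ndarray:
    """Ask the local embedding model for a vector representation."""
    resp = requests.post(EMBED_URL, json={"text": text}, timeout=30)
    resp.raise_for_status()
    return np.asarray(resp.json()["embedding"], dtype=np.float32)

# Embed the document chunks once, then score an incoming prompt against them.
chunks = ["Data residency rules...", "Edge deployment options..."]
chunk_vecs = np.stack([embed(c) for c in chunks])

prompt_vec = embed("How do I keep data in-country?")
scores = chunk_vecs @ prompt_vec / (
    np.linalg.norm(chunk_vecs, axis=1) * np.linalg.norm(prompt_vec)
)
print(chunks[int(scores.argmax())])  # most relevant chunk by cosine similarity
```

Nothing in this flow leaves the instance, which is what lets the architecture meet data residency requirements.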


Create an end-to-end serverless digital assistant for semantic search with Amazon Bedrock

AWS Machine Learning

Amazon Bedrock is a fully managed service that makes a wide range of foundation models (FMs) available through an API without having to manage any infrastructure. The solution uses Amazon API Gateway and AWS Lambda to create an API with an authentication layer that integrates with Amazon Bedrock.
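A minimal sketch of the Lambda side of such an integration follows; the model ID is a placeholder, and the request shape assumes an API Gateway proxy integration.

```python
import json
import boto3

bedrock_runtime = boto3.client("bedrock-runtime")

def lambda_handler(event, context):
    # API Gateway proxy integration delivers the user query in the body.
    body = json.loads(event.get("body") or "{}")
    query = body.get("query", "")

    # Forward the query to a Bedrock model via the Converse API.
    response = bedrock_runtime.converse(
        modelId="anthropic.claude-3-haiku-20240307-v1:0",  # placeholder
        messages=[{"role": "user", "content": [{"text": query}]}],
    )
    answer = response["output"]["message"]["content"][0]["text"]

    return {"statusCode": 200, "body": json.dumps({"answer": answer})}
```

API Gateway then fronts this handler with an authorizer to provide the authentication layer mentioned above.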


Amazon SageMaker Feature Store now supports cross-account sharing, discovery, and access

AWS Machine Learning

SageMaker Feature Store now makes it effortless to share, discover, and access feature groups across AWS accounts. With this launch, account owners can grant other accounts access to selected feature groups using AWS Resource Access Manager (AWS RAM).
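As a rough sketch of the AWS RAM side of this (the feature group ARN and account IDs below are placeholders), creating a resource share with boto3 might look like:

```python
import boto3

ram = boto3.client("ram", region_name="us-east-1")

# Share a feature group with a consumer account via AWS RAM.
share = ram.create_resource_share(
    name="feature-group-share",
    resourceArns=[
        "arn:aws:sagemaker:us-east-1:111122223333:feature-group/customers"
    ],
    principals=["444455556666"],  # consumer AWS account ID (placeholder)
)
print(share["resourceShare"]["resourceShareArn"])
```

Depending on your organization settings, the consumer account may need to accept the resource share invitation before the feature group becomes discoverable there.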